Tutorial on performance monitoring
I've found many howtos on using tools like iostat, but they don't give any examples of good performance vs bad.
I know it varies greatly based on the individual system's configuration, but none of the tutorials I've found use a sample system as an example and say something like "this server has 4 disks in a RAID 5 array, so a good avgqu-sz value from iostat should be between <whatever>."
I feel like many of the tutorials I've found assume knowledge of system internals and monitoring that I don't already have.
Could you recommend a good starting point?
Thanks for any responses.
This subject is why I get paid a six-figure (USD) salary by one of the biggest companies in the world! The simple answer is that there are a LOT of resources on the internet to help you with this, but the basic stuff for Linux systems is the sysstat package. That can provide a lot of data on how your system is performing with regard to CPU, I/O, network, and other usage. There are also other tools that can provide a nice view into how the system is acting, such as Ganglia. FWIW, sysstat and Ganglia simply look at the data in /proc on a periodic basis and then record and/or display that for the user.
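To make that concrete, here's a minimal sketch of what these tools do under the hood: sample the aggregate "cpu" line from /proc/stat at two points in time and compare the deltas. The two sample strings below are made up for illustration, not taken from a real system.

```python
# Sketch of how sysstat-style tools derive CPU utilization from /proc/stat:
# take two samples of the "cpu" line and compute busy time from the deltas.

def parse_cpu_line(line):
    """Return the per-state jiffy counters from a /proc/stat 'cpu' line."""
    fields = line.split()
    return [int(v) for v in fields[1:]]

def cpu_busy_percent(sample1, sample2):
    """Percentage of time spent non-idle between two samples."""
    t1, t2 = parse_cpu_line(sample1), parse_cpu_line(sample2)
    deltas = [b - a for a, b in zip(t1, t2)]
    total = sum(deltas)
    # In /proc/stat field order, indices 3 and 4 are idle and iowait.
    idle = deltas[3] + deltas[4]
    return 100.0 * (total - idle) / total

# Illustrative samples (user nice system idle iowait irq softirq steal ...):
s1 = "cpu  4000 100 900 8000 500 0 0 0 0 0"
s2 = "cpu  4700 120 1100 8600 540 0 0 0 0 0"
print(round(cpu_busy_percent(s1, s2), 1))  # -> 59.0
```

On a live box you'd read /proc/stat directly instead of the hard-coded strings; the arithmetic is the same thing sar and iostat are doing for you.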
As for your specific questions, I have to say that what is "good" vs. "bad" depends upon WAAAAY too many factors to go into here. I have spent many years gaining this knowledge, and even now a lot is due to gut reactions given the system configuration, usage factors, etc. I am in the process of developing serious mathematical models that will allow us to determine when our systems are misbehaving by monitoring many different factors in real time. A lot of that math is from my engineering background, utilizing things like Kalman filters on the data provided by sysstat, as well as other local factors. Not a subject for the novice or those without a suitable background in statistical analysis and calculus.
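The real models are of course far more involved, but to give a flavor of the idea: a toy one-dimensional Kalman filter can smooth a noisy metric stream (say, a sampled queue depth) so that a transient spike doesn't trip an alert the way raw thresholding would. The variances q and r below are arbitrary assumed values, not tuned for any real workload.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Smooth a noisy scalar metric stream with a 1-D Kalman filter.

    q: process-noise variance, r: measurement-noise variance
    (both assumed here; real use requires tuning to the workload).
    """
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p += q                    # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update estimate toward the measurement
        p *= (1 - k)              # shrink covariance after the update
        out.append(x)
    return out

# A steady metric with one transient spike at index 4:
noisy = [10.0, 10.4, 9.8, 10.2, 30.0, 10.1, 9.9]
smoothed = kalman_1d(noisy)
# The spike is damped relative to the raw reading of 30.0:
print(10.0 < smoothed[4] < 30.0)  # -> True
```

The point is just that filtered estimates react to sustained shifts while shrugging off one-sample noise, which is why this kind of math shows up in anomaly detection on sysstat data.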
As for a starting point, unless you have a really strong engineering background and in-depth knowledge of the Linux system, stick with packaged solutions. There are some really good ones out there, both open source (like Ganglia, Nagios, and sysstat) as well as proprietary monitoring tools. You just have to decide which provides the features that will help you the most. Bear in mind that proprietary tools like HP-Unicenter are very pricey, whereas open-source tools like Nagios and Ganglia provide a lot of the same functionality at a much lower price point. The main difference is support, and how much you are willing to learn on your own.
Sometimes, real fast is almost as good as real time.
Just remember, Semper Gumbi - always be flexible!