Gathering System Statistics for Linux using SAR

You know how it happens: you are working on a killer bash script and all of a sudden your manager is standing at your cube asking you to look at a slow-performing server. It happens all the time, but luckily we have many tools at our disposal, and one such tool is sar.

If you need I/O, CPU, and other statistics from today or from several days back, then sar is the tool to use on Linux. If you need graphing and alerting, sar is not what you want; for that, look at Nagios or other monitoring tools. But for an administrator who needs data to troubleshoot and gauge what a server is doing, sar is the tool. Sar is part of the sysstat package; it can be installed on Fedora or RHEL with yum and is available on almost all distributions.

$ sudo yum install sysstat
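
On newer, systemd-based Fedora and RHEL releases you may need to enable the service yourself before collection starts; whether this step, and the exact unit names, are required is an assumption that depends on your distribution and sysstat version:

$ sudo systemctl enable --now sysstat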

Once installed, it is enabled by default. Sar logs seven days of statistics by default and compresses logs older than 10 days. If you want to keep more than that, you can edit /etc/sysconfig/sysstat and change the HISTORY option; you can also adjust how many days pass before logs are compressed. The history and compression settings come in handy for managing log rotation.

# sysstat-10.0.3 configuration file.
# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=7

# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=10

Once sysstat is configured and enabled, it collects statistics about your system every ten minutes and stores them in a logfile under /var/log/sa via a cron job in /etc/cron.d/sysstat. The same cron file also contains a daily job that runs just before midnight and rotates out the day’s statistics. By default, the logfiles are stamped with the current day of the month, so the logs rotate automatically and overwrite the log from a month ago.
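
For reference, the collection cron file usually looks something like the sketch below; the exact binary paths (/usr/lib/sa versus /usr/lib64/sa) and times vary between distributions and sysstat versions, so check your own /etc/cron.d/sysstat rather than copying this verbatim.

# /etc/cron.d/sysstat (typical layout, paths may differ)
# Collect a sample every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# Summarize the day's data just before midnight
53 23 * * * root /usr/lib64/sa/sa2 -A
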
Typing sar with no parameters displays the current day’s CPU statistics; if you have just installed it, you will need to wait some time for statistics to be gathered.

$ sar
07:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
07:10:01 AM     all      0.60      0.02      0.76      3.17      0.00     95.45
07:20:01 AM     all      0.03      0.03      0.54      3.90      0.00     95.51
07:30:02 AM     all      2.75      0.00      1.72      6.99      0.00     88.53
07:40:01 AM     all      0.02      0.00      0.07      0.02      0.00     99.89
07:50:01 AM     all      0.02      0.00      0.08      0.02      0.00     99.88
08:00:01 AM     all      0.13      0.01      0.18      0.35      0.00     99.34
08:10:01 AM     all      0.07      0.00      0.13      0.03      0.00     99.76
08:20:01 AM     all      0.01      0.00      0.06      0.02      0.00     99.91
08:30:01 AM     all      0.01      0.00      0.06      0.02      0.00     99.91
08:40:01 AM     all      0.01      0.00      0.05      0.02      0.00     99.92
08:50:01 AM     all      0.01      0.00      0.06      0.03      0.00     99.89
09:00:01 AM     all      0.01      0.00      0.05      0.04      0.00     99.90
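
If you do not want to wait for the next scheduled sample, sar can also take live readings when given an interval in seconds and a count. For example, the following prints CPU usage every two seconds, five times, and finishes with an average line:

$ sar 2 5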

Using the -r option, sar will display memory (RAM) statistics.

$ sar -r
07:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact
07:10:01 AM    402912   1096168     73.12     45204    618888   1143864     25.34    492336    486420
07:20:01 AM    310780   1188300     79.27     92672    628116   1126108     24.95    524772    509792
07:30:02 AM    216740   1282340     85.54     87036    697792   1172404     25.97    494168    631360
07:40:01 AM    237720   1261360     84.14     87068    697820   1134920     25.14    474132    630852
07:50:01 AM    237844   1261236     84.13     87116    697824   1134920     25.14    474140    630896
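
Swap activity is reported separately on recent sysstat releases; depending on your version, the -S option shows swap space utilization (older releases folded these columns into -r):

$ sar -S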

Using the -b option, you can get disk I/O data from the day’s past samples:

$ sar -b
07:00:01 AM       tps      rtps      wtps   bread/s   bwrtn/s
07:10:01 AM     15.73     13.77      1.97   1701.20     22.60
07:20:01 AM     44.80     38.04      6.76    367.22    123.17
07:30:02 AM     26.66     21.74      4.92   1188.71    992.72
07:40:01 AM      0.19      0.01      0.18      0.08      2.49
07:50:01 AM      0.20      0.00      0.20      0.00      2.67
08:00:01 AM      1.63      0.77      0.86     49.86      9.96
08:10:01 AM      0.27      0.02      0.25      0.48      3.15
08:20:01 AM      0.19      0.00      0.19      0.00      2.56
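
For a per-device breakdown instead of system-wide totals, the -d option reports activity for each block device, and adding -p prints friendlier device names (for example sda instead of dev8-0):

$ sar -d -p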

Retrieving Older Data

Using the -s (start) and -e (end) options, you can limit the report to a window of time, and combined with -f (shown below) you can pull data from a few days in the past. For example, if you want data from 12:00 to 12:30, the syntax would be:

$ sar -s 12:00:00 -e 12:30:00
12:00:01 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 PM     all      0.05      0.00      0.10      0.02      0.00     99.83
12:20:01 PM     all      0.02      0.00      0.06      0.03      0.00     99.89
Average:        all      0.03      0.00      0.08      0.02      0.00     99.86

You can also add all of the normal sar options when pulling from past logfiles with -f, so you could run the same kind of command and add the -r argument to get memory statistics:

$ sar -s 17:00:00 -e 17:30:00 -f /var/log/sa/sa01 -r
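
If you need the historical data in a form that other tools can parse, the sadf utility that ships with sysstat reads the same logfiles and prints machine-readable output; the options after the double dash are passed straight through to sar. Treat this as a sketch to adapt to your sysstat version:

$ sadf -d /var/log/sa/sa01 -- -r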

This covers just a few of the options available with sar. Be sure to add this powerful tool to your bag of tricks for keeping your systems running at peak performance and finding those trouble areas.

2 Responses to “Gathering System Statistics for Linux using SAR”

  1. Charles Stepp says:

    Note that the default interval for sa1 is one second. If you put a crontab entry in place to run sa1 every 10 minutes without specifying the interval, you get what the CPU, etc. was doing for one second out of those 10 minutes.

    In the crontab entry, you should not be limiting the interval to 1 second. Sar uses the same system resources no matter how long the interval is: it reads kernel values, sleeps, reads the values again, and records/prints the difference. 1 second, 10 seconds, or 1200 seconds are the same as far as sar’s resource usage goes; 99.99% of sar’s time is spent in sleep, which is what the kernel does anyway when it’s not doing anything.

    Note below that the first sar sample of only a second showed an average CPU of 3%. The longer samples, averaging over a longer period, show that 6% is probably a more accurate average at this time. The web pages I’ve seen so far feed each other with this 1-second-sample thing, almost as if someone is afraid sar might bog the system down. It won’t. The same two sets of kernel reads happen no matter what the interval is:
    time sar 1 1; time sar 10 1; time sar 100 1
    Linux 2.6.18-194.el5 (blahblah) 10/07/14
    12:04:51 CPU %user %nice %system %iowait %steal %idle
    12:04:52 all 3.00 0.00 0.75 0.00 0.00 96.25
    Average: all 3.00 0.00 0.75 0.00 0.00 96.25
    sar 1 1 0.00s user 0.00s system 0% cpu 1.005 total
    Linux 2.6.18-194.el5 (blahblah) 10/07/14
    12:04:52 CPU %user %nice %system %iowait %steal %idle
    12:05:02 all 6.21 0.00 0.93 0.20 0.00 92.67
    Average: all 6.21 0.00 0.93 0.20 0.00 92.67
    sar 10 1 0.00s user 0.00s system 0% cpu 10.005 total
    Linux 2.6.18-194.el5 (blahblah) 10/07/14
    12:05:02 CPU %user %nice %system %iowait %steal %idle
    12:06:42 all 6.32 0.00 0.97 0.24 0.00 92.47
    Average: all 6.32 0.00 0.97 0.24 0.00 92.47
    sar 100 1 0.00s user 0.00s system 0% cpu 1:40.01 total
    From the man page example, each hour gets three 20-minute samples. This provides accurate averaging and small sa## files. A 1-second interval every 10 minutes is 1/600th of the information available.
    EXAMPLES
    To create a daily record of sar activities, place the following entry
    in your root or adm crontab file:
    0 8-18 * * 1-5 /usr/lib/sa/sa1 1200 3 &

  2. Sonu says:

    Thanks for the nice tutorial.
    Can you please share the way to get stats for multiple days in a single file so that it can be imported into ksar for analysis?
