
As a side note, load average also includes things waiting for disk activity (i.e. harassing the disk) as well as those waiting for cpu; it's a sum of both, so you might have problems in one or the other.

See http://en.wikipedia.org/wiki/Load_(computing) "Linux also includes [in its load average] processes in uninterruptible sleep states (usually waiting for disk activity)"
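If you want a rough idea of which side of that sum you're on, you can count tasks by state; here's a quick sketch (assuming Linux with a procps-style ps -- R means runnable/wanting cpu, D means uninterruptible sleep, usually disk wait):

# tally processes in the two states Linux folds into the load average
lines = `ps -eo stat,comm --no-headers`.lines
runnable  = lines.select { |l| l.start_with?('R') }  # competing for cpu
disk_wait = lines.select { |l| l.start_with?('D') }  # uninterruptible sleep (usually disk)
puts "runnable: #{runnable.size}, uninterruptible: #{disk_wait.size}"
puts disk_wait.join  # the likely disk offenders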

As a side note, the particular problem I ran into was that I had high load average, but also lots of idle cpu and low disk usage.

It appears that, at least in my case, threads/processes waiting for I/O sometimes show up in the load average without causing an increase in the "await" column, even though they're still I/O bound.

You can see this happen with the following code, if you run it in jruby (it just starts 100 threads, each doing lots of I/O):

100.times { Thread.new { loop { File.open('big', 'w') do |f| f.seek 10_000_000_000; f.puts 'a'; end}}} 

That gives top output like this:

top - 17:45:32 up 38 days,  2:13,  3 users,  load average: 95.18, 50.29, 23.83
Tasks: 181 total,   1 running, 180 sleeping,   0 stopped,   0 zombie
Cpu(s):  3.5%us, 11.3%sy,  0.0%ni, 85.1%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  32940904k total, 23239012k used,  9701892k free,   983644k buffers
Swap: 34989560k total,        0k used, 34989560k free,  5268548k cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
31866 packrd    18   0 19.9g  12g  11m S 117.0 41.3   4:43.85 java
  912 root      11  -5     0    0    0 S   2.0  0.0   1:40.46 kjournald

So you can see that it has lots of idle cpu, 0.0%wa, but a very high load average.

iostat similarly shows the disk as basically idle:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.62    0.00    8.75    0.00    0.00   81.62

Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda        0.00   49.00   0.00   6.40    0.00   221.60    69.25     0.01    0.81   0.66   0.42
sda1       0.00   49.00   0.00   6.40    0.00   221.60    69.25     0.01    0.81   0.66   0.42
sda2       0.00    0.00   0.00   0.00    0.00     0.00     0.00     0.00    0.00   0.00   0.00

See also http://linuxgazette.net/141/misc/lg/tracking_load_average_issues.html

As a further side note, this also seems to imply that (at least in this case--running CentOS) the load average includes each thread separately in the total.
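If you want to double check that on your own box, you can count kernel tasks (i.e. individual threads) by state straight out of /proc; again just a sketch, assuming Linux with /proc mounted:

# every thread shows up as /proc/<pid>/task/<tid>/stat; the field after "(comm)" is its state
runnable  = 0
disk_wait = 0
Dir.glob('/proc/[0-9]*/task/[0-9]*/stat').each do |path|
  begin
    stat  = File.read(path)
    state = stat[stat.rindex(')') + 2, 1]  # skip past the "(comm) " field to the state character
    runnable  += 1 if state == 'R'
    disk_wait += 1 if state == 'D'
  rescue Errno::ENOENT, Errno::ESRCH
    next  # thread exited between the glob and the read
  end
end
puts "runnable threads: #{runnable}, threads in disk wait: #{disk_wait}"
puts "loadavg: #{File.read('/proc/loadavg')}"

Run that while the 100-thread jruby snippet above is going, and the per-thread counts should roughly line up with the inflated load average, even while iostat stays quiet.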