Debugging 100% iowait on Linux

I am trying to trace why I am seeing 100% iowait on my box. If I run something like a MySQL SELECT query, the system jumps to 100% iowait (on more than one of the processors on my server), which trips my watchdogs and sometimes takes down httpd itself.

In vmstat I see a write of about 5 MB every 8 seconds or so, and it blocks at least one of the 4 processors for one or two seconds.
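For context, something like the following shows those numbers (the exact invocation may differ; iostat comes from the sysstat package and is only an additional suggestion here):

```
# 1-second samples; the bo column shows blocks written out,
# and the wa column shows CPU time spent waiting on I/O
vmstat 1

# per-device view, extended statistics every second
iostat -x 1
```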

I should mention that there are several million files on my ext3 filesystem (I have also tried ext2), atime is disabled, and nothing useful showed up in the logs. The storage is a hardware RAID mirror of two 300 GB IDE drives.

I miss dtrace. Is there any way to find out what is causing these writes? And how can I speed up the filesystem?

Ideas are welcome!

Thanks!

2 answers

Use iotop.
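For example (the flags below are the common ones; availability depends on your iotop and kernel version, since iotop needs per-task I/O accounting):

```
# show only processes actually doing I/O, in batch mode, with
# accumulated totals -- handy for catching a periodic writer
iotop -o -b -a -d 1
```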


OK, possible diagnostic steps (for posterity):

  • Have you confirmed that you have not actually run out of memory and are therefore swapping processes out to disk? (See the sketch after this list.)

  • If it is not the kernel swapping, you can use strace (since you don't have dtrace) to confirm whether it is MySQL performing the writes (example below).
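A minimal sketch of both checks. mysqld is assumed to be the suspect process, and the PID lookup is just one way to get it; adjust for your setup:

```
# 1) Swap check: nonzero si/so columns here mean the box is swapping
free -m
vmstat 1 5

# 2) Attach strace to the running mysqld and watch for write/sync calls
#    (pidof may return several PIDs; this just takes the first one)
strace -f -tt -T -e trace=write,fsync,fdatasync -p "$(pidof mysqld | awk '{print $1}')"
```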

Can you provide more detail about your hardware and OS configuration?
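If it helps, something like this gathers the basics (the device name in the hdparm line is only an example; adjust it for your RAID controller):

```
uname -a                        # kernel version
grep -c processor /proc/cpuinfo # CPU count
free -m                         # memory
df -hT                          # filesystems and their types
cat /proc/mounts                # mount options (noatime, journaling mode, etc.)
# hdparm -i /dev/hda            # drive details; device name is an example
```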

