I have inherited a Linux machine used for a wiki, CVS, and general Linux compiling and debugging. It is an AMD64 3500+ running kernel 2.6.17.9.

The performance issue shows up on the software RAID arrays, usually during compiles and large CVS operations. There are six ATA drives: two are attached to the IDE controllers on the nForce3 250 southbridge, and four are attached to two ATA controllers on the regular PCI bus. All the drives are masters on their own channel. Four drives are ATA133 and two are ATA100.

The two drives on the southbridge IDE form one RAID1 array. The other two RAID1 arrays are each split across the two PCI controllers (each controller has one drive on its primary and one on its secondary IDE channel).
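
For reference, this is roughly how I check the layout (the device names here are examples and may not match the box exactly):

    # overview of all md arrays and their member drives
    cat /proc/mdstat

    # per-array detail, e.g. for the first mirror
    mdadm --detail /dev/md0

    # confirm the DMA/ATA mode a member drive has negotiated
    hdparm -i /dev/hda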

I have noticed slowdowns, with high iowait, during heavy I/O. So I used iostat to get a sense of what is going on.
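
Roughly, I am watching it with something like this (iostat is from the sysstat package; the exact flags may differ from what I actually ran):

    # extended per-device statistics every 2 seconds; watch hda, hdc and md0
    iostat -x 2

    # the "wa" column is the percentage of CPU time spent waiting on I/O
    vmstat 2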

Here's what happens:

  1. The reads are always fast (md = 50 MB/s, hda/hdc = ~25 MB/s each), since md balances them across both mirrors (say hda and hdc). I estimate that about 1.5 GB of RAM is used to cache the reads.
  2. The copy process then immediately shows a very fast write to md0 (67+ MB/s).
  3. The member drives of the destination RAID array then show a slow write (~15 MB/s each) for a long time. I believe this is the 1.5 GB of cached data being flushed to disk (see the checks below).


This happens on all RAID arrays.
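
To check the theory in step 3 (the page cache absorbing the copy and then being flushed out slowly), I am looking at the dirty-page figures and the writeback thresholds; something like this, assuming stock vm settings on 2.6.17:

    # amount of dirty data currently waiting to be written back
    grep -E 'Dirty|Writeback' /proc/meminfo

    # thresholds that decide when pdflush starts (and forces) writeback
    sysctl vm.dirty_background_ratio vm.dirty_ratio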

Overall, copying a 1.5 GB file takes 4 minutes, about twice as long as on a single-drive system I compared against.
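
For what it's worth, these are the kinds of simple tests I can repeat to make the comparison reproducible (the mount point is just an example):

    # sequential 1.5 GB write to an array, timed including the final flush to disk
    time sh -c 'dd if=/dev/zero of=/mnt/raid1/ddtest bs=1M count=1536 && sync'

    # cached vs. raw sequential read throughput of the array and of one member drive
    hdparm -tT /dev/md0
    hdparm -tT /dev/hda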

Any ideas or suggestions?