I have an interesting issue going on. I have two VMware ESXi 4.1U1 servers, each loaded up with hard drives. Here are the two configurations:

Server 1:
Gigabyte GA-MA790XT-UD4P
Phenom II X6 1090T
16GB DDR3
4x Samsung F4 2TB 5400RPM Drives

Server 2:
Gigabyte GA-880GMA-UD2H
Phenom II X4 965
8GB DDR3
4x Seagate Barracuda ES.2 500GB 7200RPM Drives

Each server has a 5th hard drive that is used for VMware itself and the guest systems.

Since I'm too cheap to buy a real RAID card, what I'm doing is creating a single datastore that fills each disk, putting a single virtual disk in each datastore, presenting those virtual disks to Ubuntu 10.04.2, and building a Linux software RAID array across them. I'm aware that it's possible to give the OS raw access to a disk, but I haven't gotten that far yet.
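
For reference, the arrays inside the guest were created along these lines. This is a sketch rather than the exact commands: the /dev/sdb through /dev/sde device names are assumptions, so check lsblk or fdisk -l first.

    # Server 1: 4-disk RAID5 across the Samsung F4 virtual disks
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Server 2: 4-disk RAID0 across the Barracuda ES.2 virtual disks
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Persist the config so the array assembles on boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf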

Anyhow, Server 1 uses RAID5, and Server 2 is a play toy, so right now I have it set up as RAID0. The performance on Server 1 is phenomenal: even with RAID5 I get 200+ MB/s read and write to the array of 5400 RPM drives. The read performance on Server 2 is fine too, about 300 MB/s, which is what I'd expect. But write performance is an order of magnitude lower, around 30 MB/s. I certainly would not expect that from a RAID0 setup on enterprise-class drives.
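
In case the methodology matters, throughput in that ballpark can be reproduced with plain sequential dd runs like the ones below. The paths are illustrative; the fdatasync and drop_caches steps keep the guest's page cache from inflating the numbers.

    # Sequential write: force data to disk before dd reports a rate
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync

    # Sequential read: drop caches first so the test actually hits the disks
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=/mnt/array/testfile of=/dev/null bs=1M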

I don't think VMware has anything to do with it, because performance is so good on Server 1, which also has an older chipset (790X + SB750) vs. the 880G + SB850 on Server 2.
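
One way I can think of to narrow it down is to benchmark each member disk on Server 2 individually: if a single virtual disk also writes at ~30 MB/s, the problem sits below the md layer rather than in the RAID0 itself. A sketch, again with assumed device names:

    # Safe per-disk sequential read - if these are already slow, md is not at fault
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        echo "== $d =="
        sudo dd if=$d of=/dev/null bs=1M count=1024 iflag=direct
    done

    # DESTRUCTIVE per-disk write probe - overwrites the start of the disk,
    # so only run it before (re)creating the array:
    # sudo dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct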

I thought I would float this out there to see if I could get some ideas about what might be going on.

Thanks in advance.