07-13-2005 #1 (Join Date: Jul 2005)
Random I/O performance on Linux
I'm aware that this topic has come up a few times in the past (for example in this thread: http://kerneltrap.org/node/3039/), but I still cannot find a satisfactory solution (or at least a reasonable explanation) for the problem.
The application I use generates fully random I/O read/write requests of 64 KB each (fixed size). Around 60% of all operations are reads. For testing I use an IBM DS400 SAN RAID with 15 x 146 GB SCSI drives installed, connected via a 2 Gbit/sec Fibre Channel link to my test system. The test system is a dual-Xeon 2.4 GHz with 3 GB RAM and a QLogic 2340 FC adapter connecting it to the SAN. The RAID box is configured not to use RAID functionality, i.e. each of the 15 drives has its own LUN.
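For anyone who wants to reproduce something like this workload without iometer, here is a rough sketch: fixed 64 KB transfers at random offsets, roughly 60% reads / 40% writes. It runs against a plain file so it is self-contained; the file name and sizes are my own choices for the demo, not from the real setup (point FILE at a raw device for a meaningful test, and note that page-cache effects make a file-backed run much faster than raw random I/O).

```shell
#!/bin/bash
# Sketch of the described workload: 64 KB fixed-size transfers at
# random offsets, ~60% reads / 40% writes. File-backed for safety;
# FILE, BLOCKS, and the op count are illustrative assumptions.
FILE=testfile.bin
BLOCK=65536            # 64 KB fixed transfer size
BLOCKS=256             # demo file: 256 * 64 KB = 16 MB

# Pre-create the target file
dd if=/dev/zero of="$FILE" bs=$BLOCK count=$BLOCKS 2>/dev/null

i=0
while [ $i -lt 100 ]; do
    off=$(( RANDOM % BLOCKS ))              # random 64 KB-aligned offset
    if [ $(( RANDOM % 10 )) -lt 6 ]; then
        # ~60% of operations are reads
        dd if="$FILE" of=/dev/null bs=$BLOCK skip=$off count=1 2>/dev/null
    else
        # ~40% writes; conv=notrunc keeps the file size fixed
        dd if=/dev/zero of="$FILE" bs=$BLOCK seek=$off count=1 conv=notrunc 2>/dev/null
    fi
    i=$(( i + 1 ))
done
echo "done: 100 random 64 KB ops against $FILE"
```

Timing a run of this (with a much larger op count, against the raw LUNs, ideally with O_DIRECT via dd's iflag=direct/oflag=direct) gives a crude IOPS figure to compare against iometer.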
I use the latest version of iometer to test the performance of this setup. All drives combined should provide around 1100 I/O operations per second under such a load. iometer under Windows 2003 reaches ~950 IOPS and under Solaris 10 ~750 IOPS (on exactly the same hardware setup).
When running the same workload under Linux (latest Fedora Core 4, with kernel 2.6.11), I only get a pathetic 200 IOPS. I've tried the whole range of 2.6.x kernels, starting from 2.6.5, and always got the same results. I also tried switching elevators between deadline, CFQ, and noop, but it had almost no effect. I also brought in the latest QLogic drivers (including beta) and played with various driver settings, including queue depth, and with drive settings (through sdparm and blockdev), but never achieved any significant performance improvement. Interestingly, when I switched from random I/O to sequential I/O, I was able to achieve close to ~1000 IOPS on 2.6. I know that the hardware/driver is not the bottleneck, so why should it matter to the kernel which I/O pattern I'm using?
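For reference, this is how I drive the elevator and queue-depth experiments on a 2.6 kernel. The device name sdb is just a placeholder for one of the LUNs; the sysfs writes need root, and the exact sdparm/blockdev flags shown are the ones I believe apply, so double-check against your man pages.

```shell
#!/bin/bash
# Runtime I/O tuning knobs on a 2.6 kernel (requires root).
# DEV=sdb is a placeholder; substitute your actual LUN.
DEV=sdb

# List available schedulers; the active one is shown in brackets
cat /sys/block/$DEV/queue/scheduler

# Switch the elevator at runtime, e.g. to deadline
echo deadline > /sys/block/$DEV/queue/scheduler

# Inspect/raise the per-device queue depth in the SCSI mid-layer
cat /sys/block/$DEV/device/queue_depth
echo 64 > /sys/block/$DEV/device/queue_depth

# Readahead in 512-byte sectors (large readahead mostly helps
# sequential I/O, which may partly explain the sequential numbers)
blockdev --getra /dev/$DEV
blockdev --setra 256 /dev/$DEV
```

None of these made a significant difference for the random workload in my case, but posting them may save others from re-asking which knobs I already tried.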
With 2.4.x kernels I reached close to 400 random IOPS, which is much better, but still significantly less than the hardware's capability and the performance observed on Windows and Solaris.
Is there any chance of improving the results to at least 700 IOPS (which is our target) on any Linux setup? At this point we're pretty desperate and planning to switch to an alternative OS.