I came across an idea that could increase the I/O performance of Linux by up to 200% through modifications to the filesystem stack.

One of the major advantages of Linux is the availability of software mirrors (RAID1) to increase data safety: the data are stored on two different hard disks. Unfortunately this slightly decreases write performance, because everything must be written twice. When reading this data, only one disk is used. But the data rate could be increased if the data were read from both disks. The read-ahead cache voids the usefulness of this for access to unfragmented files; still, it could be used to coordinate different streams, or to reduce access time by assigning each disk a part of the data.
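As a rough sketch of the split-read idea (a toy model in Python; the "disks" are just byte strings, and the name read_striped is my own invention, not an existing kernel interface):

```python
# Toy model of splitting one large read across two RAID1 mirror copies.
# Each "disk" is just a bytes object here; on a real system these would be
# two block devices holding identical data.

CHUNK = 4  # chunk size; a real implementation would use something like 64 KiB

def read_striped(disk_a, disk_b, offset, length):
    """Read [offset, offset+length) by alternating chunks between the
    two mirrors, so both spindles could work in parallel."""
    parts = []
    pos = offset
    end = offset + length
    toggle = 0
    while pos < end:
        n = min(CHUNK, end - pos)
        src = disk_a if toggle == 0 else disk_b  # pick a mirror per chunk
        parts.append(src[pos:pos + n])
        toggle ^= 1
        pos += n
    return b"".join(parts)

data = b"abcdefghijklmnopqrstuvwxyz"
mirror_a = data
mirror_b = data  # identical copy, as on RAID1
assert read_striped(mirror_a, mirror_b, 3, 10) == data[3:13]
```

In a real driver the alternation would of course happen per request in the block layer, not per Python slice, but the principle is the same: both copies serve half of the chunks.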

As a further step, there could be an asynchronous write stack that replicates the changes only after the file is closed (perhaps via the system cache), during system idle time, or directly after a crash. In the meantime, the other disks would remain available for reading.
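The deferred replication could be sketched as a simple journal that is replayed onto the lagging copy on close or at idle time (a toy model; the class AsyncMirror and all its names are invented for illustration):

```python
# Toy model of asynchronous mirroring: writes go to the master at once,
# while replication to the slave is queued and flushed later (on file
# close, at idle time, or right away after a crash is detected).

class AsyncMirror:
    def __init__(self):
        self.master = {}   # path -> data, always up to date
        self.slave = {}    # path -> data, lags behind the master
        self.journal = []  # pending (path, data) replication records

    def write(self, path, data):
        self.master[path] = data           # synchronous write to master
        self.journal.append((path, data))  # replication is deferred

    def flush(self):
        """Replay the journal, e.g. on close() or when the system is idle."""
        for path, data in self.journal:
            self.slave[path] = data
        self.journal.clear()

m = AsyncMirror()
m.write("/data/a", b"hello")
# the slave is stale until the journal is flushed:
assert m.slave == {}
m.flush()
assert m.slave["/data/a"] == b"hello"
```

While the journal is still pending, the slave disk is free to serve reads of data it already holds, which is exactly the performance gain proposed above.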
This could be used to create a whole new kind of performance-oriented mirror, with e.g. one master disk and one or two additional slave disks (or partitions). Because of the asynchronous writing, data inconsistencies can be resolved by the master. If the master fails, one of the slaves can be declared the new master, using the (very large) write cache to synchronize it. After a proper shutdown, all disks can be assumed consistent, to avoid errors caused by a failed start.
To synchronize the disks, methods from CSCW (computer-supported cooperative work) could be used, because the consistency problems are similar.

A similar idea could be used to replace RAID5. The enormous overhead of the XOR parity calculation reduces system performance dramatically. But if you treat each disk individually, you can give each disk its own virtual file system. When a file is written, it is written to two disks (synchronously or asynchronously), for example to the two disks with the largest free space. The files are thereby mirrored, and the performance is the same as or better than RAID1. Of course you may not be able to use the full disk space when saving larger files, but that is a minor problem when using huge disks.
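The placement policy can be sketched in a few lines (a toy model; the dictionary layout and the name place_file are my own assumptions, not an existing API):

```python
# Toy model of file-level mirroring across an array of independent disks:
# each new file is stored on the two disks that currently have the most
# free space, so every file exists twice (RAID1-like safety) without any
# parity calculation.

def place_file(disks, name, size):
    """disks: dict disk_id -> {'free': int, 'files': set}.
    Picks the two disks with the most free space, stores the file on both,
    and returns the chosen disk ids."""
    targets = sorted(disks, key=lambda d: disks[d]["free"], reverse=True)[:2]
    for d in targets:
        disks[d]["free"] -= size
        disks[d]["files"].add(name)
    return targets

disks = {
    "sda": {"free": 100, "files": set()},
    "sdb": {"free": 80,  "files": set()},
    "sdc": {"free": 60,  "files": set()},
}
assert place_file(disks, "movie.mkv", 30) == ["sda", "sdb"]
assert place_file(disks, "song.mp3", 10) == ["sda", "sdc"]
```

Note how the second file lands on sda and sdc, because sdb's free space dropped below sdc's after the first placement; the policy automatically spreads load across the array.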
If designed properly, each disk could even be used outside the array to access the files on it.
To rebuild after a disk failure, you search for all files that now exist only once and copy them to the spare disk.
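The rebuild step is then just a counting pass over the surviving disks (again a toy model; the name rebuild and the set-based layout are invented for illustration):

```python
# Toy model of rebuilding after a disk failure in the file-level mirror:
# any file that now exists on only one surviving disk lost its mirror copy
# when the disk died, so it is copied to the spare.

def rebuild(disks, spare):
    """disks: dict disk_id -> set of file names on the surviving disks.
    Copies every file that exists only once onto the spare disk."""
    counts = {}
    for files in disks.values():
        for f in files:
            counts[f] = counts.get(f, 0) + 1
    for f, n in counts.items():
        if n == 1:  # only one copy left -> restore redundancy
            spare.add(f)
    return spare

survivors = {"sda": {"a", "b"}, "sdb": {"b", "c"}}
spare = rebuild(survivors, set())
assert spare == {"a", "c"}  # "b" still exists twice, so it is skipped
```

Unlike a RAID5 rebuild, only the files that actually lost redundancy are copied, not every block of the failed disk.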

This could also be used to implement high-speed defragmentation.

P.S. I don't know why this jumped into the installation section - I wanted it posted in misc.