A challenge for Linux: extremely fast response times
I wish to minimize response latency for the data/event flow described below. There is plenty of semi-related, Google-able literature on CPU shielding, preemptive kernel scheduling, real-time priorities, network-card interrupt coalescing, etc.
But can anyone suggest a definitive and proven path to achieve world class response times on Linux?
1. The machine's NIC receives a small amount of data, say < 5 KB, via TCP or UDP.
2. A listening process waiting for this data wakes up, briefly parses the data, and distributes it to 'client' processes running on the same machine via Unix domain sockets, shared memory plus semaphores, or whatever mechanism you'd suggest.
3. Client processes wake up and process the fresh data.
The goal is for data to reach step 3 as quickly as possible, i.e. in under 30 microseconds, excluding the parsing overhead in step 2. No network cards sitting on the data, no sluggish response to the NIC interrupt, and no slow IPC or scheduler delays getting from step 2 -> 3.
How can we configure our Linux environment to meet the goal? These machines are dual Xeon 5550s doing nothing of importance other than the task above, so they have ample horsepower. We have no qualms about moving most interrupt servicing to a single core. We've tried shielding CPUs, setting real-time priorities, and so forth, but haven't gotten results as good as we hoped.
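For reference, the kind of shielding/affinity setup we've been experimenting with looks roughly like this (the IRQ number, NIC name, and core split are examples for our box; the real IRQ comes from /proc/interrupts, and everything needs root):

```shell
# Illustrative tuning sketch; numbers and device names are examples.

# 1. Reserve cores 1-7 for our processes via the boot parameter:
#      isolcpus=1-7
# 2. Steer the NIC interrupt (say IRQ 24) onto core 0 (hex CPU mask):
echo 1 > /proc/irq/24/smp_affinity
# 3. Pin the listener to an isolated core under SCHED_FIFO:
chrt -f 80 taskset -c 1 ./listener
# 4. Turn off NIC interrupt coalescing so packets aren't held back:
ethtool -C eth0 rx-usecs 0 rx-frames 1
```

This is the direction we've been going; the question is whether this combination is the proven path, or whether we're missing a piece.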
We're willing to use pretty much any kernel configuration required, but would like to avoid writing kernel-space code à la real-time Linux.
Your thoughts and experience are most kindly appreciated!