  1. #1

    A challenge for Linux: extremely fast response times

    I wish to minimize response latency for the data/event flow described below. There is a good deal of semi-related, Google-able literature on CPU shielding, preemptive kernel scheduling, real-time priorities, network-card interrupt coalescing, etc.

    But can anyone suggest a definitive and proven path to achieve world class response times on Linux?


    1. The machine NIC receives a small amount of data, say < 5k, via TCP or UDP.
    2. A listening process waiting for this data wakes up, briefly parses the data, and distributes it to 'client' processes running on the same machine via Unix sockets, shared memory plus semaphores, or another suggested mechanism.
    3. Client processes wake up and process the fresh data.

    The goal is for data to reach step 3 as quickly as possible, as in less than 30 microseconds, not including parsing overhead in step 2. No network cards sitting on the data, no sluggish response to the NIC interrupt, and no slow IPC or scheduler delays getting from step 2 -> 3.
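    The IPC hop from step 2 to step 3 can be sketched with Unix-domain datagram sockets. This is only an illustrative sketch: a socketpair stands in for the per-client sockets, and the payload string is hypothetical.

    ```c
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* A socketpair stands in for the listener->client Unix-socket hop
           (step 2 -> step 3): sv[0] is the listener side, sv[1] the client. */
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0) return 1;

        /* Step 2: the listener has parsed a small datagram and fans it out.
           In the real system there would be one sendto() per client socket. */
        const char msg[] = "tick:42";   /* hypothetical parsed payload */
        send(sv[0], msg, sizeof(msg), 0);

        /* Step 3: the client wakes up and processes the fresh data. */
        char buf[8192];
        ssize_t n = recv(sv[1], buf, sizeof(buf), 0);
        if (n > 0) printf("client got: %s\n", buf);

        close(sv[0]);
        close(sv[1]);
        return 0;
    }
    ```

    Datagram (rather than stream) Unix sockets keep message boundaries intact, so the clients never block reassembling a partial message.
    
    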

    How can we configure our Linux environment to meet the goal? These machines are dual Xeon 5550s doing nothing of importance other than the task above, so they have ample horsepower. We have no qualms about moving most interrupt servicing to a single core. We've tried shielding CPUs, setting real-time priorities, and so forth, but haven't gotten results as good as we hoped.
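    For what it's worth, one concrete starting point — a sketch only, assuming a core has been shielded at boot (e.g. with isolcpus= on the kernel command line; the core number and priority below are made up) — is to pin the listener to that core, run it under SCHED_FIFO, and lock its memory so the hot path never takes a page fault:

    ```c
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main(void) {
        /* Pin this process to core 2 (hypothetical shielded core). */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        /* Run under the real-time FIFO scheduler at a high priority.
           This needs root (or CAP_SYS_NICE). */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* Lock current and future pages into RAM to avoid
           page-fault latency on the hot path. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        printf("policy: %d (SCHED_FIFO is %d)\n",
               sched_getscheduler(0), SCHED_FIFO);
        return 0;
    }
    ```

    Moving the NIC's IRQ to the same socket (via /proc/irq/*/smp_affinity) while keeping the listener's core free of other work is the usual complement to this.
    
    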

    We're willing to use pretty much any kernel configuration required, but would like to avoid writing code in kernel space a la real-time Linux.
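    One purely user-space trick that avoids both kernel code and scheduler wakeup latency is busy-polling the socket with non-blocking reads: the listener never sleeps, so no wakeup is needed when data lands, at the cost of pegging a core at 100%. A minimal sketch (the socketpair here only simulates the arriving datagram):

    ```c
    #include <stdio.h>
    #include <errno.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0) return 1;
        send(sv[0], "data", 5, 0);   /* simulate an arriving datagram */

        /* Busy-poll: spin on a non-blocking recv instead of sleeping in
           the kernel. When data arrives there is no context-switch or
           scheduler-wakeup cost, only the loop iteration. */
        char buf[8192];
        for (;;) {
            ssize_t n = recv(sv[1], buf, sizeof(buf), MSG_DONTWAIT);
            if (n > 0) {
                printf("got %zd bytes\n", n);
                break;
            }
            if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
                break;   /* real error, bail out */
        }
        close(sv[0]);
        close(sv[1]);
        return 0;
    }
    ```

    Combined with a shielded core, this removes the scheduler from the step 2 -> 3 path entirely; the obvious tradeoff is one core permanently burned per spinning process.
    
    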

    Your thoughts and experience are most kindly appreciated!

  2. #2

    Your first step...

    Your first step here would be to try to squeeze more speed out of your network layer. I have done this by writing code, but as an alternative you might want to look at the work done by Dr. Luca Deri. He has published his TNAPI (multithreaded NAPI) work on the net. His approach is generic: you add his code and make sure you have purchased the correct hardware.

    You will need to read through the article he links to about PF_RING as well. He responds to emails rather quickly and I have found him to be quite helpful.

    The nature of your data is going to be an issue as well. Small packets mean you will be link-layer bound (hardware), while large packets mean you will be application-layer bound (stack). The nature of your data will therefore decide where to spend your optimization effort.

    The next step will be your application, and I can't really help you with that. Hope this helps.


  3. #3
    Linux Guru Rubberman (joined Apr 2009):
    You need to use the real-time extensions for the kernel in order to get this sort of response time and this kind of process-priority assignment. The regular scheduler may or may not meet your deadline needs. In any case, please explain more fully what you are trying to accomplish, what type of applications you are developing, and what environment they will run in.
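    Before reaching for the real-time patches, it is worth measuring what the stock scheduler actually delivers on your hardware. A rough sketch in the spirit of cyclictest: sleep to an absolute 1 ms deadline in a loop and record how late each wakeup actually was (the interval and iteration count are arbitrary):

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec next, now;
        clock_gettime(CLOCK_MONOTONIC, &next);
        long worst_ns = 0;

        for (int i = 0; i < 1000; i++) {
            /* Ask to wake exactly 1 ms after the previous deadline... */
            next.tv_nsec += 1000000;
            if (next.tv_nsec >= 1000000000) {
                next.tv_sec++;
                next.tv_nsec -= 1000000000;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            /* ...and record how late the scheduler actually ran us. */
            clock_gettime(CLOCK_MONOTONIC, &now);
            long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                         + (now.tv_nsec - next.tv_nsec);
            if (late_ns > worst_ns) worst_ns = late_ns;
        }
        printf("worst wakeup latency: %ld us\n", worst_ns / 1000);
        return 0;
    }
    ```

    If the worst-case number here is already well under your 30-microsecond budget with your shielding and priorities applied, the stock kernel may be enough; if it spikes into the milliseconds, that argues for PREEMPT_RT.
    
    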
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

