  1. #1
    T2R
    Just Joined!
    Join Date: Oct 2010
    Location: Croatia
    Posts: 7

    Process VS Thread


    Hi!

    I'm new to Linux, so I'm wondering how I can measure the time taken by N processes and by N threads, and then compare the two times to prove that threads are faster than processes.
    I would be grateful for some easy-to-understand C code, or for a good way to time N processes and N threads in C.

    Regards!
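
    Not from the thread, but here is a minimal sketch of the kind of timing harness being asked for: it runs the same dummy workload once under N forked processes and once under N pthreads, timing each run with clock_gettime(). N, LOOPS, and work() are placeholder choices; substitute the real workload. Build with gcc -O2 -pthread (older glibc may also need -lrt for clock_gettime).

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <pthread.h>

    #define N      8            /* number of workers (placeholder) */
    #define LOOPS  10000000UL   /* dummy workload size (placeholder) */

    static void work(void)
    {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < LOOPS; i++)
            sum += i;
    }

    static void *thread_worker(void *arg)
    {
        (void)arg;
        work();
        return NULL;
    }

    static double elapsed(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec t0, t1;

        /* --- N processes --- */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0) { work(); _exit(0); }   /* child: do the work, then exit */
        }
        for (int i = 0; i < N; i++)
            wait(NULL);                           /* parent: reap all children */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d processes: %.3f s\n", N, elapsed(t0, t1));

        /* --- N threads --- */
        pthread_t tid[N];
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            pthread_create(&tid[i], NULL, thread_worker, NULL);
        for (int i = 0; i < N; i++)
            pthread_join(tid[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d threads:   %.3f s\n", N, elapsed(t0, t1));

        return 0;
    }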

  2. #2
    Linux Enthusiast gerard4143's Avatar
    Join Date
    Dec 2007
    Location
    Canada, Prince Edward Island
    Posts
    714
    First question. Is this code running on a multi-core processor?
    Make mine Arch Linux
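
    As an aside (not part of gerard4143's post): a quick way to answer that question from C is to ask how many CPUs are online via sysconf(); from the shell, nproc or /proc/cpuinfo gives the same answer.

    Code:
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Number of CPUs currently online, so you know whether the
         * comparison will run on one core or several. */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        printf("online CPUs: %ld\n", ncpus);
        return 0;
    }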

  3. #3
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,380
    Quote Originally Posted by gerard4143
    First question. Is this code running on a multi-core processor?
    Good question. Relatively speaking, the comparison should not matter on single-core vs. multi-core hardware, but it may, so both situations should be investigated for a real comparison. In any case, you need to be sure that you are timing the functional portions of the code, not the load/start/shutdown parts, and that code should be identical in both the multi-threaded and multi-process versions. Also, be sure that you are not using any constructs that require synchronized access to resources; otherwise locking/semaphores/mutexes will skew the results. In effect, you are trying to compare the context-switch overhead of threads vs. processes.

    In my experience, it is in the aggregate a wash. The difficulty of coding threaded applications to avoid resource contention often exceeds the benefit of the performance gain you get vs. a shared-nothing multi-process architecture. I have designed and developed major systems using both approaches, and personally, I can get better time-to-market and more reliable systems with a shared-nothing multi-process architecture. Coding is simpler, debugging is easier, and with modern CPU architectures the performance delta is just not worth the headaches.
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
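
    A sketch of the "time only the functional portion" advice above, for the threaded case: each thread times just its own work() call and the main thread aggregates the results, so thread creation and joining stay out of the measurement. The same idea applies to the fork() version, but each child would have to send its time back through a pipe or shared memory, since it has its own address space. work(), N, and the loop count are placeholder choices.

    Code:
    #include <stdio.h>
    #include <time.h>
    #include <pthread.h>

    #define N 8                               /* number of threads (placeholder) */

    static void work(void)
    {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 10000000UL; i++)
            sum += i;
    }

    /* Each thread writes only its own slot, so no locking is needed. */
    static double per_thread_seconds[N];

    static void *worker(void *arg)
    {
        int idx = *(int *)arg;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        work();                               /* only this part is timed */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        per_thread_seconds[idx] = (t1.tv_sec - t0.tv_sec)
                                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[N];
        int idx[N];
        double total = 0.0;

        for (int i = 0; i < N; i++) {
            idx[i] = i;
            pthread_create(&tid[i], NULL, worker, &idx[i]);
        }
        for (int i = 0; i < N; i++) {
            pthread_join(tid[i], NULL);
            total += per_thread_seconds[i];
        }
        printf("total work time across %d threads: %.3f s\n", N, total);
        return 0;
    }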

  4. #4
    T2R
    T2R is offline
    Just Joined!
    Join Date
    Oct 2010
    Location
    Croatia
    Posts
    7
    It really doesn't matter whether it is single- or dual-core, but I will run it on a dual-core processor. I'm wondering what the source should look like, so that I can also run the program on a single-core processor. Threads should be faster because they have less information to save on the stack, but I want to know the difference in time. Thank you for the answer!
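
    One way to run the same binary "single-core" on a dual-core machine (my suggestion, not something from the thread) is to pin the process to one CPU before creating any workers; forked children and new threads inherit the affinity mask. The same effect is available without code changes via taskset -c 0 ./yourprog. A minimal Linux-specific sketch:

    Code:
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Restrict the calling process to CPU 0. Children created with fork()
     * and threads created afterwards inherit this affinity mask. */
    static int pin_to_cpu0(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                     /* allow only CPU 0 */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        if (pin_to_cpu0() == 0)
            printf("pinned to CPU 0; create the worker processes/threads here\n");
        return 0;
    }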

  5. #5
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,380
    Quote Originally Posted by T2R
    It really doesn't matter whether it is single- or dual-core, but I will run it on a dual-core processor. I'm wondering what the source should look like, so that I can also run the program on a single-core processor. Threads should be faster because they have less information to save on the stack, but I want to know the difference in time. Thank you for the answer!
    This sort of study has been done extensively over the past 20 years. When all other things are equal, the performance delta depends a lot upon whether the system is a CISC or RISC architecture, the instruction set (some modern processors have fast context-switch instructions), cache availability, CPU/memory bus speed, etc. Just tuning the testing environment is a lengthy process if the test is to be verifiable and repeatable.

    I know about all this because I was the performance testing engineering manager (I started the PE department) for a major enterprise software vendor for a couple of years, until we hired some top-rung talent who only wanted to do performance engineering (hired away from DEC). We probably invested several engineer-years in getting the test infrastructure (hardware and software) in place so that we could go to the DEC, HP, IBM, and Sun performance engineering centers with our gear and software and do repeatable performance tests and analysis that actually showed us something. One time, our tests uncovered a hardware bug in HP's big-iron Ethernet controllers when we ramped the tests up to "stress" levels, so you never know what supposedly "simple" tests will uncover.

    What did we learn through all of this to get consistent, repeatable, analyzable results?

    1. Test software design is critical. I've had to go as far as writing our own scheduler in order to keep systems fed the way a real-world scenario would.
    2. The hardware infrastructure setup is crucial when testing within the same hardware family.
    3. When running on SMP architectures, allocate one CPU (or core) per major application server.
    4. When #3 is followed, the performance delta between coarse-grained multi-threading (a multi-process, shared-nothing architecture) and fine-grained multi-threading (real threads in one process) is not worth the engineering work it takes to get fine-grained MT to run reliably without deadlocks and resource contention.

    Good luck!
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  6. #6
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,380
    FWIW, the systems I described scale to thousands of concurrent users and control major manufacturing plants worldwide. If it has a chip, disc drive, or flat-panel display in/on it, then the software we produced built it.
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
