  1. #1
    Just Joined!
    Join Date
    Jun 2008
    Location
    North East U.S.
    Posts
    30

    Memory usage reporting when using pthreads in 2.6


    I have a multi-threaded program that runs on Linux, Solaris, and AIX. It simulates logic circuits. For one such simulation, the program reports that it used 2.2 GB of memory. On Linux 2.6 only, the ps -lm command (and top) report the process using 14.2 GB; Solaris and AIX report about 2.4 GB in use. I suspect Linux is misreporting the memory usage. In this case I am using 4 threads. When I run single-threaded, ps -l reports 2.4 GB in use. I don't know why ps reports this process using 12 GB more than it should. Any suggestions? Note that on the 2.4 kernel, ps reported 4 processes all using the same amount of memory, so reporting has definitely improved in the 2.6 kernel. This machine is running kernel 2.6.9-42.ELsmp.

    Any help would be appreciated. Thanks!

  2. #2
    Just Joined!
    Join Date
    Jun 2008
    Posts
    34
    The following command might give us some clues:
    pmap <pid>, where <pid> is the process ID of your 2 GB program in question.

    It would also be interesting to see how the pmap output differs between the multi-threaded and single-threaded runs.

    -Steve

  3. #3
    Just Joined!
    Join Date
    Jun 2008
    Location
    North East U.S.
    Posts
    30
    Steve,

    From pmap it looks like each of the 15 created threads is being allocated 820 MB, probably for its own stack area. There appears to be a 4 KB page allocated below each of these regions, which could be used to catch stack overflows. The main thread has a 1 MB stack, and pmap shows that area with type [stack], while the other 15 820 MB areas show as simply [anon]. But since there are 15 threads in addition to the main thread (16 total) and exactly 15 of these 819200 KB [anon] allocations, they seem to correspond to the created threads.

    My application does not call any of the pthread_attr_setstack functions, so it seems that on Linux I'm getting some very large stack sizes. The application does pass an attr parameter to pthread_create that is initialized via pthread_attr_init, but I can't find anything that sets the stack size for each thread, so they should be the default size. Where is the default thread stack size set?

  5. #4
    Just Joined!
    Join Date
    Jun 2008
    Posts
    34
    Hi,
    Could you check "ulimit -s" (logged in as the user that runs your simulation program) to see your stack limit?
    I have a 2.6.22.5 kernel and my default stack maximum is 8192 KB.
    If you get 819200 (i.e. 819200 KB), you can change it to a smaller size using ulimit, re-run your program, and use pmap to see the effect.

    -Steve

  6. #5
    Just Joined!
    Join Date
    Jun 2008
    Location
    North East U.S.
    Posts
    30
    Steve,

    That's it! This program is being invoked from a script that was setting ulimit -s 819200, and that appears to be where the default thread stack size comes from. Odd that the only memory area explicitly labeled as [stack] by pmap was only 1 MB in size, given the ulimit setting. This same script is used on AIX and Solaris, so I'm not sure why this problem doesn't show up on those systems.

    Thanks for your help!

    -Brion

  7. #6
    Just Joined!
    Join Date
    Jun 2008
    Posts
    34
    Brion,

    >Odd that the only memory area explicitly labeled as [stack] from pmap was only 1MB in size given the ulimit setting.
    While the process stack (your 1 MB) is managed by the Linux kernel, the thread stacks are managed by the pthread library, which allocates a memory region equal in size to the ulimit for each thread (if the user program does not specify a stack size via the thread attribute).
    I'm not sure what version of Solaris you are running. pmap on OpenSolaris 11 labels each thread stack with its thread ID; you might want to take a look at it yourself.
    You might also check the output of "svmon -P <PID>" on AIX.

    >This same script is used on AIX and Solaris, so I'm not sure why this problem doesn't show up on those systems.
    You mentioned that you used "ps -ml" to check the memory statistics on Linux. I assume you are referring to the number in the "SZ" column.
    I checked the "ps -eLl" command on OpenSolaris 11 (similar to ps -ml on Linux) and found that its "SZ" column reports the allocated memory within a memory region rather than the total size of the region. Using "pmap -r <PID>" on Solaris gives something closer to what "SZ" means in "ps -ml" on Linux.
    We need to be careful about the exact meaning of the ps columns when comparing ps output across operating systems.

    -Steve

  8. #7
    Just Joined!
    Join Date
    Jun 2008
    Location
    North East U.S.
    Posts
    30
    Steve,

    I'm now explicitly setting the thread stack size to 1 MB, and the problem is resolved. The issue came up because a user was monitoring the progress of the process via the top command, and on Linux it was showing a VIRT amount many times higher than what we were actually using. This was not happening with top on AIX or Solaris. It may still be true that huge stack spaces were reserved for each thread, but top on Solaris reports a SIZE that does not include that virtual space. The pmap command on Solaris did not show any 819 MB areas allocated even though ulimit -s is set to 819200 KB. The top and ps commands on AIX also did not show the stack space. AIX appears not to have a pmap command.

    -Brion

  9. #8
    Just Joined!
    Join Date
    Jun 2008
    Posts
    34
    Brion,

    Good to hear you have resolved the problem.

    >The pmap command on Solaris did not show any 819MB areas allocated even though ulimit -s is set to 819200 K.
    I assume you are referring to "pmap -r" (since pmap without any flag reports different statistics, as I mentioned). No, it won't show the ulimit; it will have some default value (1 MB on OpenSolaris 11). This is reasonable, since the pthread libraries in Linux and Solaris are two different implementations. By the way, "pmap -r" on Solaris will show the process stack (the one without a tid) with the ulimit value.

    >AIX appears to not have a pmap command.
    AIX does not have a pmap command. That's why I suggested using "svmon -P <PID>" instead.

    -Steve

  10. #9
    Just Joined!
    Join Date
    Jun 2008
    Location
    North East U.S.
    Posts
    30
    Thanks for your help, Steve! I find it odd that with ulimit -s 819200 on all platforms, none of them seems to actually allocate 819 MB for the main thread's stack. Perhaps they allow the stack to grow dynamically to that size, but do not allocate or reserve the space until it is used for the first time.

    -Brion

  11. #10
    Just Joined!
    Join Date
    Jun 2008
    Posts
    34
    Brion,

    >I find it odd that with ulimit -s 819200 on all platforms, none of them seem to actually allocate 819 MB for the stack for the main thread. Perhaps they do allow the stack to dynamically grow to that size, but do not allocate or reserve the space until it is used for the first time.
    You are right. The main thread behaves like a traditional single-threaded process: ulimit -s only sets the maximum. The system detects a stack overflow when the process stack grows beyond that maximum.

    -Steve
