Thread: Effect of changing stack size
With the user-space stack size limit set to the default 8M, we can see an application using 132K of stack; if we set the limit to 32K (with ulimit), the same application uses only 28K.
So we wonder:
- What is the meaning of those numbers? Why 132K when the stack maximum is 8M, and 28K when it is 32K?
- What is the effect of reducing the stack limit on the application's execution?
Thank you in advance.
Last edited by shahradp; 03-27-2013 at 02:37 AM.
Consider it a prediction of usage. If you have 8MB of available stack space, the system will "predict" that you will use at least 132K (give or take). If you reduce that space, it adjusts downward accordingly. This allows the operating system to minimize kernel-level memory allocations and re-allocations.
Remember, the stack is primarily used for two things: automatic variables declared inside functions, and function-call/argument storage, which matters especially in recursive code. If the kernel has to reallocate stack space, it can be very expensive from a performance perspective, as it not only has to reallocate physical/virtual memory, it also has to "fix up" all the pointers into that space.
So, ONLY change the default stack ulimit if you REALLY know what you are doing, and why!
This question came to us when we saw that reducing the maximum stack size produces a significant change in the memory footprint of the process.
Apparently the change affects all of the threads that the process's libraries use, so now the question is:
- Why does changing the maximum stack size affect not only the process's own stack (which you explained, though shouldn't that change be proportional?), but also the [ anon ] sections and the libraries' memory space?
Here is part of the process memory map, first with an 8M maximum stack size and then with 32K:
401af000 40K r-x-- /usr/lib/libgcc_s.so.1
401b9000 28K ----- /usr/lib/libgcc_s.so.1
401c0000 4K r---- /usr/lib/libgcc_s.so.1
401c1000 4K rw--- /usr/lib/libgcc_s.so.1
401f3000 80K r-x-- /lib/libpthread-2.8.so
40207000 28K ----- /lib/libpthread-2.8.so
4020e000 4K r---- /lib/libpthread-2.8.so
4020f000 4K rw--- /lib/libpthread-2.8.so
40210000 8K rw--- [ anon ]
40212000 1188K r-x-- /lib/libc-2.8.so
4033b000 28K ----- /lib/libc-2.8.so
40342000 8K r---- /lib/libc-2.8.so
40344000 4K rw--- /lib/libc-2.8.so
40345000 12K rw--- [ anon ]
40348000 4K ----- [ anon ]
40349000 8188K rw--- [ anon ]
40b48000 4K ----- [ anon ]
40b49000 8188K rw--- [ anon ]
be9ee000 132K rw--- [ stack ]
And with the 32K limit:
40131000 40K r-x-- /usr/lib/libgcc_s.so.1
4013b000 28K ----- /usr/lib/libgcc_s.so.1
40142000 4K r---- /usr/lib/libgcc_s.so.1
40143000 4K rw--- /usr/lib/libgcc_s.so.1
4015e000 476K r-x-- /lib/libm-2.8.so
401d5000 28K ----- /lib/libm-2.8.so
401dc000 4K r---- /lib/libm-2.8.so
401dd000 4K rw--- /lib/libm-2.8.so
401de000 4K ----- [ anon ]
401df000 28K rw--- [ anon ]
401e6000 4K ----- [ anon ]
401e7000 28K rw--- [ anon ]
40256000 80K r-x-- /lib/libpthread-2.8.so
4026a000 28K ----- /lib/libpthread-2.8.so
40271000 4K r---- /lib/libpthread-2.8.so
40272000 4K rw--- /lib/libpthread-2.8.so
40273000 8K rw--- [ anon ]
40275000 1188K r-x-- /lib/libc-2.8.so
4039e000 28K ----- /lib/libc-2.8.so
403a5000 8K r---- /lib/libc-2.8.so
403a7000 4K rw--- /lib/libc-2.8.so
403a8000 12K rw--- [ anon ]
bef96000 28K rw--- [ stack ]
ffff0000 4K r-x-- [ anon ]
Appreciate any reply.