- 09-06-2010 #1
Memory optimization for child processes
I have been following the forum for a while now. This is my first post here.
I work on Linux for ARM processor for cable modem. There is a tool that I have written (as the job demands) that sends/storms customized UDP packets using raw sockets. I form the packet from scratch so that we have the flexibility to play with different options. This tool is mainly for stress testing routers.
The details are here.
I actually have multiple interfaces created, each of which obtains an IP address via DHCP. This is done in order to make the modem behave as virtual customer premises equipment (vCPE).
When the system comes up, I start the processes that are requested. Each process I start continuously sends packets, so process 0 sends packets using interface 0, and so on. Each of these sending processes allows configuration at run time (changes to UDP parameters and other options). That's the reason I decided to use separate processes.
I start these processes using fork and exec from the modem's provisioning process.
The problem now is that each process takes up a lot of memory. Starting just 3 such processes, causes the system to crash and reboot.
I have tried the following:-
1-I had always assumed that pushing more code into shared libraries would help. But when I moved many functions into a shared library and kept minimal code in the processes, to my surprise it made no difference.
2-I also removed all arrays and allocated them on the heap instead. That made no difference either. Maybe this is because the processes run continuously, so it does not matter whether the memory is on the stack or the heap?
3-I suspect that the process from which I call fork is huge, and that this is why the processes I create end up huge as well. I am not sure how else to go about it. Say process A is huge, and I start process B by fork and exec, so B inherits A's memory area. Would having A start an intermediate process C, which in turn starts B, help? I think not, since C still inherits from A. I also tried vfork as an alternative, which did not help either, and I wonder why.
I would appreciate it if someone could give me tips to help reduce the memory used by each independent child process.
Kindly do let me know if you need more details or clarification.
Last edited by Cabhan; 09-06-2010 at 03:59 PM. Reason: Moved to Programming forum.
- 09-06-2010 #2
These are complex programming problems that properly belong in the "Linux Programming & Scripting" forum. That said, I will make a couple of observations and comments (after 30 years of embedded and realtime experience, including on ARM systems).
1. You need to determine whether the problem is caused by code, stack, or heap. This is an RCA (Root-Cause Analysis) problem.
2. If code, your approach should help, but apparently it doesn't. If stack (automatic variables, recursion, depth of call stack), you will possibly need to refactor your code. If heap, you will need to reduce your use of dynamically allocated memory.
Sometimes, real fast is almost as good as real time.
Just remember, Semper Gumbi - always be flexible!