  1. #1 | Just Joined! | Join Date: Nov 2004 | Posts: 47

    Question about "Out of memory: Killed process ..."


    Hi all,

    Usually, when a particular program A eats too much memory by calling malloc() without free(), I observe that it is killed by the Linux kernel and the system log records the event, e.g. "Out of memory: Killed process 64 (A)".

    When I wrote program A myself and knew it contained malloc()s without matching free()s, I always saw A itself reported as out of memory and killed, never other processes. But now I have binaries A, B and C whose source code I cannot inspect, all running at the same time. I get the errors "Out of memory: Killed process 64 (A)" and "Out of Memory: Killed process 30 (rc)", and A, B and C all die. rc is the script that started B and C, and B started A via the system() function. Should I blame A, and not the others, for eating too much memory?
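    To make the symptom concrete, here is a minimal leak reproducer (a sketch; the file name leaker.c and the 1 MiB step size are illustrative assumptions, not one of the binaries discussed above). It mallocs a block on every loop iteration, touches it so the pages become resident, and never frees anything; on a swap-less box it eventually dies with exactly this kind of "Out of memory: Killed process ..." log line.

        /* leaker.c -- hypothetical reproducer: allocate 1 MiB per iteration,
         * touch it, and never free it.  On a machine without swap this
         * eventually triggers an "Out of memory: Killed process ..." entry
         * in the kernel log.  Build with: gcc -o leaker leaker.c
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            size_t mib = 0;

            for (;;) {
                char *p = malloc(1024 * 1024);
                if (p == NULL) {              /* may never happen while overcommit is on */
                    fprintf(stderr, "malloc returned NULL after %zu MiB\n", mib);
                    return 1;
                }
                memset(p, 0xAA, 1024 * 1024); /* touch the pages so they are really backed */
                mib++;
                printf("leaked %zu MiB so far\n", mib);
            }
        }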

    I checked the malloc() man page, which says something like: "Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."

    It says that one or more processes will be killed, not necessarily the process that ate too much memory.
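    The "optimistic allocation" wording can be demonstrated directly. The sketch below (an illustrative demo, not taken from the man page; the 2 GiB figure is an assumption meant to exceed free RAM on the test box) asks for one huge block: the malloc() call itself usually succeeds, and it is only when memset() forces the kernel to back the pages with real memory that the OOM killer may step in. The outcome also depends on the /proc/sys/vm/overcommit_memory setting.

        /* overcommit_demo.c -- sketch of the man page's "optimistic" behaviour.
         * The single huge malloc() usually succeeds because only address space
         * is handed out; the process is at risk of being OOM-killed later, when
         * memset() makes the kernel supply real pages.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* assumed to be larger than the free RAM of the test machine */
            size_t huge = (size_t)2 * 1024 * 1024 * 1024;
            char *p = malloc(huge);

            if (p == NULL) {
                fprintf(stderr, "malloc returned NULL up front\n");
                return 1;
            }
            printf("malloc of %lu bytes succeeded -- the memory may still not exist\n",
                   (unsigned long)huge);

            memset(p, 0, huge);   /* touching the pages is what can trigger the OOM killer */
            puts("survived touching every page");
            free(p);
            return 0;
        }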

    Could some experienced programmers please explain this to me and suggest a way to debug it?

    Thanks in advance

  2. #2 | Linux Newbie | Join Date: Nov 2004 | Location: New York | Posts: 150
    Wait, under Linux there is no guarantee that malloc() will return NULL when the system runs out of memory? I thought that was a requirement of the ANSI standard. Is there an alternative way to detect when memory is not available?
    \"Nifty News Fifty: When news breaks, we give you the pieces.\" - Sluggy Freelance

  3. #3 | Just Joined! | Join Date: Nov 2004 | Posts: 47
    I found the Linux-MM docs describing the OOM killer policy here: http://linux-mm.org/docs/oom-killer.php

    Though the policy is stated clearly:

    * memory use: the more memory a process is using, the more memory we will free up, and the higher the likelihood that this program is too big for the system and couldn't have run to completion anyway
      o more memory use increases the likelihood of being killed
    * CPU use: the more processor time a process has used, the more work will be lost if we kill this process
      o more CPU time decreases the chance of being killed
    * time since start: the longer a process has been running, the more likely it is that the process is stable and not "guilty" of exhausting system resources
      o a longer run time decreases the chance of being killed
    * system administrator rights: usually only trusted programs and important system programs run as root or with capabilities enabled
      o running as root decreases the chance of being killed
    * direct hardware access: killing a process which has direct hardware access may lead to hardware getting confused and the machine hanging; also, programs with direct hardware access are usually important for whatever task the system is doing
      o direct hardware access decreases the chance of being killed

    Now I am confused about how to determine the actual process which ran out of memory. One more piece of information: my system doesn't have swap space.

    Any suggestion?
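    One suggestion, assuming a kernel new enough to expose it (later 2.6 kernels do; the file may not exist on older ones): each process's /proc/<pid>/oom_score holds the "badness" value the kernel computes from exactly the factors listed above, so dumping it for every pid shows who the OOM killer would pick next. A quick sketch (the file name oom_score_dump.c is just illustrative):

        /* oom_score_dump.c -- sketch: list every pid with the OOM "badness"
         * score the kernel has computed for it.  Assumes /proc/<pid>/oom_score
         * exists (later 2.6 kernels); the command name is taken from
         * /proc/<pid>/cmdline and may be empty for kernel threads.
         */
        #include <stdio.h>
        #include <dirent.h>
        #include <ctype.h>

        int main(void)
        {
            DIR *proc = opendir("/proc");
            struct dirent *de;

            if (proc == NULL) {
                perror("opendir /proc");
                return 1;
            }

            while ((de = readdir(proc)) != NULL) {
                char path[280], cmd[256] = "?";
                long score;
                FILE *f;

                if (!isdigit((unsigned char)de->d_name[0]))
                    continue;                         /* not a pid directory */

                snprintf(path, sizeof path, "/proc/%s/oom_score", de->d_name);
                f = fopen(path, "r");
                if (f == NULL)
                    continue;                         /* no oom_score on this kernel/pid */
                if (fscanf(f, "%ld", &score) != 1)
                    score = -1;
                fclose(f);

                snprintf(path, sizeof path, "/proc/%s/cmdline", de->d_name);
                f = fopen(path, "r");
                if (f != NULL) {
                    if (fgets(cmd, sizeof cmd, f) == NULL)   /* argv[0] ends at first NUL */
                        snprintf(cmd, sizeof cmd, "?");
                    fclose(f);
                }

                printf("pid %-6s  oom_score %-6ld  %s\n", de->d_name, score, cmd);
            }
            closedir(proc);
            return 0;
        }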

  4. #4 | Just Joined! | Join Date: Nov 2004 | Posts: 47
    Today I found the problem on my system, so I'd like to update my post.

    When the system runs out of memory, the OOM killer decides which process to kill based on its own policy. First, it tends to pick the process that currently owns the most memory. Under this policy, the expectation is that the biggest process is the malfunctioning one. But in my case, even after grabbing all of the remaining available memory, the malfunctioning process was still not the one holding the largest amount of memory, so the OOM killer killed the process that owned the most memory, not the malfunctioning one.

    So now I know that when the system log shows "Out of memory: Killed process <process id>", it doesn't mean that this <process id> is the process that caused the system to run out of memory.

    Any comments?
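    For what it's worth, the scenario described above can be reproduced with a small sketch: the parent grabs one big, stable allocation (the innocent "victim") while its child leaks slowly (the real culprit). Which one the OOM killer shoots depends on the kernel version and the heuristic above, but on a swap-less machine the biggest consumer, i.e. the parent, is the usual target. The program name and the 512 MiB / 4 MiB sizes are assumptions to be tuned to the test machine's RAM.

        /* victim_vs_leaker.c -- sketch of the situation described above: the
         * parent makes one big, stable allocation while the child leaks slowly.
         * When memory finally runs out, the OOM killer tends to pick the
         * biggest consumer (the parent) even though the child is the process
         * that is actually misbehaving.  Sizes are assumptions to be tuned.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>

        #define BIG  (512u * 1024 * 1024)   /* parent's one-off allocation: 512 MiB */
        #define STEP (4u * 1024 * 1024)     /* child leaks 4 MiB per second */

        int main(void)
        {
            char *big = malloc(BIG);
            pid_t pid;

            if (big == NULL) {
                perror("malloc");
                return 1;
            }
            memset(big, 1, BIG);            /* make the big block resident */

            pid = fork();
            if (pid < 0) {
                perror("fork");
                return 1;
            }

            if (pid == 0) {                 /* child: the real culprit */
                for (;;) {
                    char *p = malloc(STEP);
                    if (p != NULL)
                        memset(p, 1, STEP);
                    sleep(1);
                }
            }

            /* parent: sits on its big allocation and waits to be blamed */
            for (;;)
                pause();
        }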
