  1. #1
    Just Joined!
    Join Date
    Jul 2008
    Posts
    3

    core file size problem


    ****************************************
    Problem Description
    ****************************************
    I am programming on an embedded Linux system.
    I have a problem in which the core file created by a crashing process is corrupted, because the memory size of the process is bigger than the space available in the directory where the core file is written, as can be seen from the data below:

    Swap: 0k av, 0k used, 0k free 152420k cached

    PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
    540 root 9 0 233M 99M 2936 S 0.1 9.5 0:00 0 syslogd
    (crashed process info)


    and the filesystem allocation of the core-file directory is:
    admin_0 12 $ df
    Filesystem 1k-blocks Used Available Use% Mounted on
    none 215040 46804 168236 22% /core

    and during the crash the disk usage is:
    Filesystem 1k-blocks Used Available Use% Mounted on
    none 215040 215040 0 100% /core
    (the directory is full)


    So from the above it can be concluded that the core file is supposed to be bigger than the free space in the directory, and that is why a corrupted (truncated) file is created.
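
    This can be double-checked before a crash by comparing the process's virtual size with the space left on /core; a rough sketch, using PID 540 from the top output above only as an example:

    grep VmSize /proc/540/status   # virtual size of the process (roughly what the core would need)
    df -k /core                    # space actually available where the core is written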

    I have no more disk space, so increasing the directory size is not an option.
    I was thinking of splitting the core file into several pieces and compressing them via "tar -cvzf" in order to prevent the above problem. The compressed files are much smaller, ~0.5 MB.
    I think there is a lot of junk information in those files. Does anyone know a way to create a compact core file on which I could still run at least "backtrace" in gdb?
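
    One variation on this idea, which would avoid ever writing the full uncompressed core to /core: if the kernel is recent enough (roughly 2.6.19 and later, I believe), /proc/sys/kernel/core_pattern may start with "|", in which case the kernel pipes the core image to a helper program instead of writing it to disk, and the helper can compress it on the fly. A rough sketch, assuming such a kernel; the helper path and name are only examples:

    echo '|/core/compress-core.sh %e %p' > /proc/sys/kernel/core_pattern

    /core/compress-core.sh (must be executable; the kernel expands %e and %p to the executable name and PID, passes them as $1 and $2, and feeds the raw core image on stdin):

    #!/bin/sh
    # Compress the incoming core image so the full-size file never hits the disk.
    exec /bin/gzip -c > /core/core."$1"."$2".gz

    gdb cannot read the gzipped file directly, so it would have to be gunzipped somewhere with enough space (for example on a host machine) before running "backtrace".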

    Is anyone familiar with the above situation? What do you think about my solution? Is it hard to implement? Has anyone dealt with the code that creates core files?

    I would be glad for any suggestions to solve my problem.

    Thanks a lot,
    Jose

  2. #2
    Just Joined!
    Join Date
    May 2008
    Posts
    55
    It looks like for debugging you need only the backtrace.
    Your process must be allocating or mallocing a huge amount of memory.
    I suggest writing a kernel patch that modifies the elf_core_dump function (I am not sure about the function name).
    When a core dump happens, it dumps the process image...
    so change it not to dump the heap section,
    which will reduce the size of the core.
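
    If your kernel is recent enough (around 2.6.23 and later, if I remember correctly), you may not even need a patch: each process has a /proc/<pid>/coredump_filter bitmask that controls which kinds of memory are written to the core. A rough sketch, using PID 540 from the top output only as an example:

    cat /proc/540/coredump_filter           # current mask: bit 0 = anonymous private,
                                            # bit 1 = anonymous shared, bits 2-3 = file-backed
    echo 0x0c > /proc/540/coredump_filter   # keep only file-backed mappings

    One caveat: the heap and the stack are both anonymous private memory, so there is no mask value that drops the heap while keeping the stack; if gdb then cannot produce a useful backtrace, the kernel patch approach above is still the way to go.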
