  1. #1
    Just Joined! · Join Date: Feb 2006 · Posts: 1

    Shared Memory Problem


    Hi all,

    When we build a big shared memory segment with the 'shmget' and 'shmat' functions, we have no problem on Red Hat Linux.
    On Fedora 4 we get a "Segmentation Fault" error with the same program, and we must set
    "ulimit -s unlimited" in every terminal console that we open.
    Can anyone tell me how to correct this problem permanently in Fedora Linux?


    Thank you in advance.
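
    (The original code was never posted; below is a minimal sketch, assuming an IPC_PRIVATE key and an arbitrary 64 MiB size, of the kind of shmget/shmat setup being described. Note that shmget reports an oversized segment with an error return such as EINVAL, not a segfault, which hints that the crash comes from somewhere else.)

        /* Sketch: create and attach a large System V shared memory
         * segment. The key and size are assumptions for illustration. */
        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void)
        {
            size_t size = 64UL * 1024 * 1024;   /* 64 MiB, assumed */
            int id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
            if (id == -1) {
                perror("shmget");   /* EINVAL here if size > shmmax */
                return 1;
            }

            char *mem = shmat(id, NULL, 0);     /* attach at a kernel-chosen address */
            if (mem == (void *)-1) {
                perror("shmat");
                return 1;
            }

            mem[0] = 'x';                       /* the segment is usable */

            shmdt(mem);                         /* detach... */
            shmctl(id, IPC_RMID, NULL);         /* ...and mark for removal */
            return 0;
        }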

  2. #2
    Linux Guru · Täby, Sweden · Join Date: Oct 2001 · Posts: 7,578
    Maybe this is just me, but if you have to unlimit the stack, then I'd say that the problem you should solve isn't how to unlimit the stack, but rather how to avoid allocating that much memory on the stack.
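
    (For illustration, a hedged sketch of the kind of change being suggested; the 16 MiB figure is an assumption, chosen to exceed the default 8 MiB stack limit. A large frame-local array eats stack, while a malloc'd equivalent lives on the heap and does not.)

        #include <stdlib.h>

        void risky(void)
        {
            char buf[16 * 1024 * 1024];   /* on the stack: can SIGSEGV */
            buf[0] = 0;
        }

        void safe(void)
        {
            char *buf = malloc(16 * 1024 * 1024);   /* on the heap */
            if (buf != NULL) {
                buf[0] = 0;
                free(buf);
            }
        }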

  3. #3
    Just Joined! · India · Join Date: Jan 2006 · Posts: 52
    Hi reza,

    If I understood you correctly, check the shared memory limit set on your system. Try the command below:
    #cat /proc/sys/kernel/shmmax

    I still wonder how 'ulimit -s unlimited' solved your problem.

    And did you find out why your program is crashing?
    If it's OK with you, please paste the code.
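
    (If it helps, here is a small sketch that performs the same check programmatically: it reads /proc/sys/kernel/shmmax so a program can compare the limit against the segment size it intends to request.)

        #include <stdio.h>

        int main(void)
        {
            unsigned long long shmmax = 0;
            FILE *f = fopen("/proc/sys/kernel/shmmax", "r");
            if (f == NULL) {
                perror("fopen");
                return 1;
            }
            if (fscanf(f, "%llu", &shmmax) == 1)
                printf("shmmax: %llu bytes\n", shmmax);
            fclose(f);
            return 0;
        }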

  4. #4
    Linux Guru · Täby, Sweden · Join Date: Oct 2001 · Posts: 7,578
    Quote Originally Posted by rajeshk
    I still wonder how 'ulimit -s unlimited' solved your problem.
    Of course, I can only conjecture as to what the problem is, but the -s option to ulimit controls the stack limit, which is normally set to 2k pages (8 MiB with 4 KiB pages). If such a limit is set and a process tries to use more stack than that, it is sent a SIGSEGV. Thus, a plausible explanation for why his program normally crashes with a segmentation fault, and does not once he has uncapped the stack, is that he allocates more than 8 MiB of memory on the stack. Whether this is by deep recursion, large frame-local variables or excessive use of alloca(), I don't know, but as I stated, I would suggest he look into that rather than into how to permanently uncap the stack.
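
    (A quick way to confirm the 8 MiB figure from inside a program is getrlimit(); a minimal sketch:)

        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_STACK, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            /* On a stock setup this typically reports 8192 KiB; after
             * "ulimit -s unlimited" it reports unlimited. */
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack limit: unlimited\n");
            else
                printf("stack limit: %lu KiB\n",
                       (unsigned long)(rl.rlim_cur / 1024));
            return 0;
        }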

  5. #5
    Just Joined! · India · Join Date: Jan 2006 · Posts: 52
    Hi Dolda,

    You are right: the program crashes once it crosses the stack limit set with ulimit.
    I also tested it, but I was shocked that when I set the stack size to 10 (KiB), the program received SIGSEGV at COMPILATION, which I had never seen before.

    Please try it...

  6. #6
    Linux Guru · Täby, Sweden · Join Date: Oct 2001 · Posts: 7,578
    It's not very strange at all. It's not the program that gets the SIGSEGV, but the compiler. If you set the stack limit to 10 KiB, then it's hardly surprising that most programs you try to run will crash from overrunning that limit (especially the compiler, which probably recurses rather deeply).

  7. #7
    Just Joined! · India · Join Date: Jan 2006 · Posts: 52
    I overlooked that 'cc' is also an executable, so "ulimit -s" also limits the stack size of the 'cc' command.

    Thanks Dolda.

    -
    rajesh
