  1. #1
    Just Joined!
    Join Date
    Mar 2007
    Posts
    19

    Something periodically thrashing my machine


    Hi all (and apologies if this isn't the right forum to be posting in),

    I run a Debian Lenny machine on what I feel is reasonable hardware: 2 GB RAM, a dual-core processor, SATA disks. Its main use is as my desktop machine, but it also runs Apache and MySQL for a few low-traffic websites.

    Every couple of days or so, something goes wrong. The machine pretty much freezes up for a few minutes, and the disks can be heard churning away non-stop. After 5 or 10 minutes it recovers, and I usually find that Firefox has been killed because the system was out of memory.

    My guess is that the machine is starting to swap memory, hence the high disk I/O. When Firefox is killed, this frees up enough RAM for the machine to swap everything back into memory.
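
    (I'm assuming I could confirm the out-of-memory kills in the kernel log with something like the following; the log path is the standard Debian one.)

    Code:
    # look for OOM-killer messages in the current kernel ring buffer
    dmesg | grep -i 'out of memory'
    # and in the persistent kernel log
    grep -i 'oom\|out of memory' /var/log/kern.log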

    I'm having trouble finding what is causing this, though. Usually you can tell when you're running out of RAM, but this happens suddenly. I can't prove that it's Firefox (although I've disabled all plugins - no change). Other possible culprits (that I can think of): MySQL, cron jobs (although there's no regular time at which the problem occurs).

    During these problems it's hard to do much, but I have looked at the process list (nothing using much CPU or RAM) and at vmstat/iostat. They show high I/O on my primary disk (containing the whole OS: /var, /home etc.), but that doesn't get me any nearer to solving the problem.
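
    (For reference, these are roughly the invocations I mean; iostat comes from the sysstat package.)

    Code:
    # refresh every 2 seconds; the si/so columns show swap traffic, bi/bo show block I/O
    vmstat 2
    # extended per-device statistics, refreshed every 2 seconds
    iostat -x 2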

    I tried moving /var onto a different disk (since that's the directory the websites and MySQL run out of), but that didn't solve it.

    Is there a way I can see which processes are responsible for the high I/O?

    I know the easy answer is to just buy more RAM (or maybe not - if something is spiraling out of control, it might eat all the RAM no matter how much I had), but I'm not really a power user, and I feel what I've got should be enough. Here's the output of 'free -m' at the moment (pretty normal conditions):

    Code:
                 total       used       free     shared    buffers     cached
    Mem:          2014       1251        762          0          7        140
    -/+ buffers/cache:       1102        911
    Swap:            0          0          0
    Any other suggestions for tracking this down, please? I have a pretty similar setup on a machine with half the CPU and RAM, and it doesn't have this problem.

  2. #2
    Linux Newbie
    Join Date
    Jan 2008
    Location
    Canada
    Posts
    109
    Quote Originally Posted by kerm1t View Post
    Hi
    You may try setting up swap space that is at least twice your RAM; mkswap (followed by swapon) will do this for you. Another possibility is that you do not have enough disk space to allow for swapping. Based on your code insert (Swap is 0 across the board), I suspect a swap partition should solve your problem. Cheers...
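
    If you'd rather not repartition, a swap file works too - roughly like this, run as root (the path and size are just examples):

    Code:
    # create a 2 GB file to use as swap (path and size are examples)
    dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile
    # format it as swap and enable it immediately
    mkswap /swapfile
    swapon /swapfile
    # add it to /etc/fstab so it comes back after a reboot
    echo '/swapfile none swap sw 0 0' >> /etc/fstab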

  3. #3
    Just Joined!
    Join Date
    Aug 2009
    Posts
    76
    Indeed, you could allocate a small amount of swap space (perhaps 1GB) to give yourself a bit of a buffer to catch a process which is spiraling out of control. The way I would do it is to use gparted to shrink an existing partition by 1GB, create a 1GB partition formatted as swap, and then run 'sudo swapon /dev/sdXY'.
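
    To make that survive a reboot and to confirm it's active, something along these lines (sdXY is still a placeholder for the real partition):

    Code:
    # enable the new swap partition now
    sudo swapon /dev/sdXY
    # confirm the kernel is using it
    cat /proc/swaps
    free -m
    # add an fstab entry so it is enabled at boot
    echo '/dev/sdXY none swap sw 0 0' | sudo tee -a /etc/fstab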

    If you run 'top' or 'htop' periodically, you can pick up on any processes which have a memory leak or are using an abnormally high amount of CPU (use the '<' and '>' keys to change the sort column within top), if that is indeed the case. Sometimes, too, you can just Ctrl+Alt+F1 to a terminal and 'sudo killall -9 processname' to kill a process and free up some RAM so you can do a better diagnosis (since even the tty will run like **** if you are out of memory). You could also try restarting the X server with Ctrl+Alt+Backspace to do the same thing, since that will also kill all graphical processes.
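
    A couple of one-liners along the same lines can help catch the hog from a terminal (the process name below is only an example):

    Code:
    # snapshot of the ten biggest memory users
    ps aux --sort=-%mem | head -n 10
    # one non-interactive iteration of top, handy for logging from a cron job
    top -b -n 1 | head -n 20
    # kill a runaway process by name ('firefox-bin' is only an example)
    sudo killall -9 firefox-bin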
