  1. #1

    Unusual memory-related problems with Java

    I'm having some unusual issues.

    One of my servers is a 64-bit Debian Squeeze install:
    Kernel: 2.6.32-5-amd64
    Java: 1.7.0_09 (64-bit) - Manually installed, although I have tried the sun-java6-jre package also, with the same issue.

    It has 8 GB of memory and runs Apache and PHP along with a few Java applications. One of these is RealObjects PDFreactor, a PDF engine used to convert HTML pages to PDF documents. Recently it has been necessary to convert very large documents, so after hitting memory limits, I increased the memory allocated to PDFreactor by passing '-Xmx1G' to the Java process. The service starts, but other processes then have memory-related issues, such as:
    • PHP runs out of memory when anywhere from 512k to several megabytes have been allocated (nowhere near our memory limit)
    • When running simple commands from within PHP, such as 'mkdir', I see the error 'exec(): Unable to fork [mkdir in ...]'
    • PHP XSLT Processor 'parser error : out of memory error'

    You would think that the server would have no memory available, but that isn't true. A typical result when running 'free -m' is:
                 total       used       free     shared    buffers     cached
    Mem:          8004       7593        411          0          0       4389
    -/+ buffers/cache:       3203       4801
    Swap:         1905          0       1905
    I can see that only 411 MB is 'free', but 4389 MB is cached, so that memory is effectively free too. As far as I can tell, the system has over 4 GB of 'free' memory, yet I keep receiving memory-related errors.
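To make that reasoning concrete, here is the usual back-of-the-envelope estimate of "effectively free" memory (free + buffers + cached), plugging in the numbers from the 'free -m' output above:

```shell
# Figures (in MB) taken from the 'free -m' output above
mem_free=411; buffers=0; cached=4389
echo "Effectively free: $(( mem_free + buffers + cached )) MB"
# prints: Effectively free: 4800 MB
```

This matches the 4801 MB shown on the '-/+ buffers/cache' line (off by one from rounding in free's MB display).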

    The problem is quite evident when trying to run Java with particular memory allocations. For example:
    With 512 MB allocated:
    ~# free -m
                 total       used       free     shared    buffers     cached
    Mem:          8004       7608        395          0          0       4390
    -/+ buffers/cache:       3218       4786
    Swap:         1905          0       1905
    ~# java -Xms16m -Xmx512m -version
    java version "1.7.0_09"
    Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
    With 1 GB allocated:
    ~# java -Xms16m -Xmx1G -version
    Error occurred during initialization of VM
    Unable to allocate bit map for parallel garbage collection for the requested heap size.
    Error: Could not create the Java Virtual Machine.
    Error: A fatal exception has occurred. Program will exit.
    So when trying to run Java with 1 GB of allocated heap, it fails, even though the system effectively has over 4 GB of available memory.
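For anyone debugging something similar: assuming a kernel that exposes commit accounting in /proc/meminfo, it is worth comparing Committed_AS against CommitLimit, since under strict overcommit an allocation fails once it would push Committed_AS past CommitLimit, regardless of how much memory is merely cached:

```shell
# Show the kernel's virtual-memory commit accounting
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
# Show the current overcommit mode (0, 1 or 2)
cat /proc/sys/vm/overcommit_memory
```

If Committed_AS is close to CommitLimit, large up-front reservations like the JVM's heap (and its GC bitmaps) will be refused even while 'free' reports gigabytes of cache.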

    Oddly, I have another server with an almost identical setup (same version of Debian, same kernel, same Java, and same PDFreactor), and that installation never has these issues.

    Does anyone have any clue as to why these problems are occurring or what I can do to resolve them?
    Last edited by sirkent; 10-24-2012 at 07:04 PM.

  2. #2
    Linux User cheesecake42 — Join Date: Jan 2007 — Orlando, FL
    I ALWAYS have issues with memory when running Java-based servers, particularly Confluence's wiki and ticketing products. My personal solution is to stay away from Java-based systems as much as possible. That doesn't always work, though.

    I think you're on the right track adjusting the heap size. You have to find a sweet spot where your Java applications have enough memory to run correctly without starving other, non-Java applications. I would even try running your Java applications on their own system if at all possible.

  3. #3
    I have resolved the issue!

    It appears to be a bug in the kernel when vm.overcommit_memory is set to 2.

    This file contains the kernel virtual memory accounting mode. Values are:

    0: heuristic overcommit (this is the default)
    1: always overcommit, never check
    2: always check, never overcommit

    In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the
    default check is very weak, leading to the risk of getting a process
    "OOM-killed". Under Linux 2.4 any nonzero value implies mode 1. In
    mode 2 (available since Linux 2.6), the total virtual address space on
    the system is limited to (SS + RAM*(r/100)), where SS is the size of
    the swap space, and RAM is the size of the physical memory, and r is
    the contents of the file /proc/sys/vm/overcommit_ratio.
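Plugging the original post's numbers into that formula (1905 MB swap, 8004 MB RAM, and assuming the default overcommit_ratio of 50, which I have not verified for this machine) gives the limit mode 2 would enforce:

```shell
# Mode-2 commit limit: SS + RAM*(r/100), in MB
SS=1905; RAM=8004; r=50
echo "mode-2 commit limit: $(( SS + RAM * r / 100 )) MB"
# prints: mode-2 commit limit: 5907 MB
```

With roughly 5.9 GB of committable address space, a healthy system should still fit a 1 GB JVM heap, which is why the failures point to an accounting bug rather than the limit itself.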
    I had overcommit_memory set to 2, which should simply prevent the kernel from letting programs commit more virtual memory than that limit. However, there appears to be some sort of kernel bug that sometimes causes this accounting to fail, so that, seemingly at random (as I experienced), the kernel believes there is no free memory to allocate.

    I have reset this back to '0', and the problem has gone away.
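For reference, the change can be applied at runtime and made persistent like this (run as root; these are the standard sysctl mechanisms, nothing specific to this bug):

```shell
# Runtime change — takes effect immediately, lost on reboot
sysctl -w vm.overcommit_memory=0
# Equivalent direct write to the proc interface
echo 0 > /proc/sys/vm/overcommit_memory

# Persist across reboots
echo 'vm.overcommit_memory = 0' >> /etc/sysctl.conf
```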

