  1. #1

    NFS: "File size limit exceeded"


    Before I start asking questions and requesting help, I need to give a little info about my setup...

    I have two servers that I built and am in charge of administering. The first is our primary CAD server: a 750GB hardware RAID-5 array, 4GB of memory, and a dual-core processor, running Fedora 7. The other is our incremental backup server (I keep roughly one month's worth of incremental backups): a 750GB software RAID-0 array, 2GB of memory, and an Athlon XP processor, also running Fedora 7.

    The CAD server is connected directly to the network, and the backup server sits behind it, sharing its internet connection. I share files between the two by mounting an NFS share from the backup server onto the CAD server. Every night at a set time, several scripts gather up copies of all the files that have changed and copy them over to the backup server. A few hours after the files are copied, the backup server takes over and compresses them for storage.
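
    For reference, the mount on the CAD server looks something like this (the hostname, export path, and mount point here are illustrative, not my exact ones):
    Code:
    [root@cad ~]# grep backup /etc/fstab
    backup:/srv/backups   /mnt/backup   nfs   defaults   0 0
    [root@cad ~]# mount /mnt/backup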

    This setup was working fine until I started running into files exceeding 2GB in size. Whenever the transfer hits a 2GB+ file, the backup server complains "File size limit exceeded" and the transfer stops.
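
    One thing I've read is that NFSv2 uses 32-bit file offsets and therefore caps individual files at 2GB, while NFSv3 doesn't have that limit. The negotiated version shows up in the mount options, so a quick check on the CAD server would be:
    Code:
    [root@cad ~]# grep nfs /proc/mounts
    # look for a vers= field in the mount options; vers=2 would explain a hard 2GB ceiling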

    Here is the output of ulimit -a and the contents of my limits.conf file:

    ulimit -a output:
    Code:
    [root@backup ~]# ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 49151
    max locked memory       (kbytes, -l) 32
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 49151
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    Contents of /etc/security/limits.conf:
    Code:
    #<domain>      <type>  <item>         <value>
    *               hard    fsize           10240000
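    Since limits.conf is applied through PAM, I'm not sure it even covers the nightly cron job; a line like this at the top of the backup script (the log path is arbitrary) would show the limit the script actually runs with:
    Code:
    # record the effective file-size limit each run
    ulimit -f >> /tmp/backup-ulimit.log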
    Even with the above settings and restarting the NFS server, I still get errors that the file size limit was exceeded. Has anyone else experienced this? Do you know how to fix it?

    Thanks to anyone who responds!

  2. #2
    I'm still not sure what is causing NFS to misbehave, but I have found a way around this problem by installing Samba on the backup server and mounting a Samba share using mount.cifs. Not only does this get around the 2GB file limit, but the CIFS transfers are also about twice as fast.
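
    For anyone who runs into the same thing, the mount looks something like this (server name, share, mount point, and credentials are placeholders):
    Code:
    [root@cad ~]# mount -t cifs //backup/backups /mnt/backup -o username=backupuser,password=secret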
