  1. #1 | Just Joined! | Join Date: Apr 2006 | Posts: 3

    Poor NFS Write Performance, System Hangs


    I've just set up NFS over a gigabit network (1000Mbps) and am getting poor performance: 27MB/sec for reads but only 5MB/sec for writes. The theoretical maximum for a gigabit interface is 125MB/sec, so in practice I would expect to get closer to 50-75MB/sec in both directions.

    I tested the speed using:
    time dd if=/dev/zero of=/mnt/ddtest bs=16k count=1638400
    time dd if=/mnt/ddtest of=/dev/null bs=16k
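    One caveat with dd over NFS: the write timing can be skewed by the client page cache, so the test may finish before much data has actually crossed the wire. A hedged variant of the tests above (writing to /tmp here just as an illustration; point of= at the NFS mount, e.g. /mnt/ddtest, for the real measurement):

    ```shell
    # conv=fdatasync makes dd flush the data to the target before it
    # reports its timing, so the figure reflects real write throughput
    # rather than how fast the client page cache absorbs the data.
    # (Point of= at the NFS mount, e.g. /mnt/ddtest, for the real test.)
    dd if=/dev/zero of=/tmp/ddtest bs=16k count=1024 conv=fdatasync

    # For the read test, drop the client page cache first (as root) so
    # the data actually comes over the wire instead of from memory:
    #   echo 3 > /proc/sys/vm/drop_caches
    dd if=/tmp/ddtest of=/dev/null bs=16k
    ```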
    I tried various settings for rsize/wsize (131072, 65536, 32768, 8192) with little change (performance actually degraded slightly with each smaller size). The default of 128K (131072) seems to be the best; larger values fell back to 128K, so I assume that is the maximum supported value.
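    For reference, rsize/wsize are passed as mount options on the client, and the kernel negotiates them down to what both ends support; nfsstat -m shows what was actually settled on. A sketch, with server:/export and /mnt as placeholder names:

    ```shell
    # Mount with explicit rsize/wsize; the kernel negotiates these down
    # to the largest size both client and server support.
    mount -t nfs -o rsize=131072,wsize=131072 server:/export /mnt

    # Show the options the kernel actually settled on for each NFS mount.
    nfsstat -m
    ```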

    I performed the same test locally on both the client and server machines and achieved hard-drive read/write rates in the 70MB/sec range, so the bottleneck seems to be the network.

    With scp I get 14MB/sec for both read and write. I'm not surprised the scp rate is lower than NFS reads, given the potential ssh overhead, but I was happy to see that the read and write rates matched, as I thought they should since my network is full duplex.

    In addition to the slow write rate, my server hangs intermittently under NFS stress (dd reads of 25G files), requiring a reboot to recover. I'm not sure where to look for log messages that might hint at why.
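    For the hang, the kernel ring buffer and syslog are the usual first places to look on Fedora 8. A sketch of the common diagnostic spots (assuming default syslog paths):

    ```shell
    # Kernel messages (oopses, NFS/network driver errors, soft lockups):
    dmesg | tail -n 50

    # System log on Fedora 8; syslog writes kernel and daemon messages here:
    tail -n 100 /var/log/messages

    # Server-side NFS statistics; a climbing "badcalls" count is a hint:
    nfsstat -s
    ```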

    I'm running
    nfs-utils 1:1.1.0-6.fc8
    nfs-utils-lib.i386 1.1.0-4.fc8

    Any ideas?

  2. #2 | Linux Guru | Join Date: Nov 2007 | Posts: 1,762
    > With scp I get 14MB/sec on both read and write.
    This is slow. SSH processing on today's CPUs does not drag a gigabit network down to that speed. You can use iperf to do more network testing; it's a powerful tool.
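    A sketch of an iperf run (classic iperf, assuming it is installed on both machines; the host name is a placeholder):

    ```shell
    # On the server, start iperf in listen mode:
    iperf -s

    # On the client: a 10-second TCP test toward the server, then the
    # reverse direction (-r runs each direction separately, one at a
    # time, so you can compare send and receive throughput).
    iperf -c nfsserver -t 10 -r
    ```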

    > happy to see that the read and write rates were the same as I thought they should be since my network is full duplex.
    Duplex has no effect on these send and receive tests. If I run a "send" test, let it complete, and then run a "receive" test, large amounts of traffic only ever flow in one direction at a time. Duplex matters when sending and receiving *at the same time.*

    You'll need to do more research into how a system buffers network transfers. If I send 100MB to another machine, it easily fits in memory and the receiving machine reports "data received" right away. But once that buffer fills and the system has to start flushing data to the hard drive, the speed can drop dramatically.
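    This buffering point maps directly onto the NFS server's sync/async export option: with sync (the default in recent nfs-utils), the server must commit each write to disk before replying, which alone can explain write rates far below the network speed. A sketch of the server-side export file, with a placeholder path and network:

    ```shell
    # /etc/exports (server side). "sync" forces the server to commit
    # writes to disk before acknowledging them: safe but slow. "async"
    # lets it acknowledge from memory: faster, but risks data loss if
    # the server crashes before flushing.
    /export  192.168.1.0/24(rw,sync,no_subtree_check)

    # After editing /etc/exports, re-export without restarting NFS:
    exportfs -ra
    ```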

    The fact that the system hangs also points to a driver or hardware problem.
