NFS server performance
I need help with NFS optimisation.
Mount options used:

mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=2,retrans=10,rsize=32768,wsize=32768 ip:/directory /directory

How can I debug NFS and pinpoint its weak spot (config, network, etc.)? Or maybe someone can suggest some parameters that should be changed for my heavily loaded system.
It also depends on what storage you use.
Thanks NixSavy for the reply.
I only need read-only access.
Please correct me if I'm mistaken:
hard - used to create a more stable link between client and server (it will keep retrying to reconnect and regain its previous state).
timeo and retrans - limit the hard mount, because otherwise it will retry contacting the server indefinitely (if it crashes or lags).
nointr - means don't allow file operations to be interrupted.
timeo - stands for timeout in tenths of a second... what is this parameter's purpose?
retrans - retransmissions... after x retransmissions, show an error...
So the last two configure how tolerant the client is of server failures...?
I have about 250 concurrent sessions on NFS. The thread limit on the NFS server is calculated as 2x250 + reserve.
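That sizing rule can be written down as a quick sketch (the reserve value here is an assumption; pick whatever headroom fits your box):

```shell
# Hypothetical nfsd sizing: two threads per concurrent session plus a reserve
SESSIONS=250
RESERVE=12                        # assumed headroom, not from the thread
THREADS=$((2 * SESSIONS + RESERVE))
echo "$THREADS"
# Apply at runtime (as root):  rpc.nfsd "$THREADS"
# Or persist it via RPCNFSDCOUNT in /etc/sysconfig/nfs on Red Hat-style systems
```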
Client rpc stats:
calls      retrans    authrefrsh
76691666   14         0
Storage is used to access video files for playback in flash player.
NixSavy, are these mount options taken from your experience (tests etc.) or found in literature (Google)?
Based on the description of your load, I'd say it's network and disk.
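To confirm that, a few client- and server-side counters are worth checking first (a sketch assuming a Linux box; `nfsstat` ships with nfs-utils and `iostat` with sysstat):

```shell
nfsstat -m     # per-mount negotiated options (rsize, proto, timeo)
nfsstat -c     # client RPC counters; a growing retrans column points at network loss
iostat -x 5    # on the server: per-disk utilisation and queue depth
```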
The one I pasted is a working one, but consider it only an example.
You need to consider your environment before setting the parameters.
Maybe later today I'll come back with more details. Bit busy right now :-)
Sorry for the delay. Got busy all day.
Except for the rsize and wsize parameters, all the other settings control client behaviour when the NFS server doesn't respond or isn't accessible.
hard
If for some reason the connection is lost, the user cannot terminate the process waiting for the NFS communication.
nointr
It's related to hard, and it's advised to use it. If you use the intr option in conjunction with a hard mount, any signals received by the process interrupt the NFS call, so users can still abort hanging file accesses and resume work.
timeo
Sets the time (in tenths of a second) the NFS client will wait for a request to complete. The default value is 7 (0.7 seconds). What happens after a timeout depends on whether you use the hard or soft option. You can increase the value if the NFS server is many hops away.
rsize and wsize
These speed up NFS communication for reads and writes by setting a larger data block size, in bytes, to be transferred at one time. Some old kernels and network cards don't support large block sizes. For NFSv2 and NFSv3 the default value for both parameters is 8192;
for NFSv4 the default value for both parameters is 32768.
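A rough way to compare block sizes is to remount and time a large sequential read (a sketch reusing the thread's ip:/directory export; the test file is a placeholder and the cache drop needs root):

```shell
umount /directory
mount -o ro,tcp,vers=3,rsize=32768,wsize=32768 ip:/directory /directory
sync; echo 3 > /proc/sys/vm/drop_caches      # drop the client page cache first
dd if=/directory/bigfile of=/dev/null bs=1M count=1024   # dd prints throughput at the end
```

Repeat with rsize=8192 and compare the MB/s dd reports; cached rereads will lie to you, hence the drop_caches step.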
tcp
Mounts the NFS filesystem using the TCP protocol instead of the default UDP protocol. Supported for 2.4 and 2.5 kernels.
The advantage is that dropped packets can be retransmitted. Some NFS servers support UDP only.
Performance considerations if required
1) turn off autonegotiation on network cards
2) think of increasing nfsd instances
3) increase memory limits in proc
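The three points above translate roughly into commands like these (a sketch; the interface name and the values are assumptions to adapt to your hardware):

```shell
# 1) fixed speed/duplex instead of autonegotiation
ethtool -s eth0 autoneg off speed 1000 duplex full
# 2) more nfsd instances (or set RPCNFSDCOUNT in /etc/sysconfig/nfs)
rpc.nfsd 64
# 3) larger socket buffers for the NFS transport, i.e. the /proc/sys/net/core limits
sysctl -w net.core.rmem_max=262144
sysctl -w net.core.wmem_max=262144
```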
OK, I think I have found my bottleneck... it's the network... :(
The storage sends at 10Gbit/s to the streaming server, which has only 1Gbit/s, and there are discards on the switch. :(
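One way to watch those discards from the hosts themselves (a sketch; the interface name is a placeholder):

```shell
ethtool -S eth0 | grep -iE 'drop|discard'   # NIC-level counters on the streaming server
ip -s link show eth0                        # kernel RX/TX dropped counters
```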
Can I limit the NFS server's transfer speed per export :) ?