  1. #1

    How to optimize Samba performance?


    I have installed a Linux server (CentOS with a GUI) on RAID 1; its only role will be a file server. I installed Samba and configured it. The file system is ext3, and the server has 4GB of memory.
    A program on Windows writes files to the share.

    I searched and found that there are some options you can add to boost performance. This is what I added to smb.conf:

    refresh = 1
    socket options = TCP_NODELAY IPTOS_LOWDELAY
    read raw = No
    write raw = Yes
    max xmit = 131072
    use sendfile = Yes
    deadtime = 15
    getwd cache = Yes
    oplocks = No
    strict allocate = yes

    I got complaints that the writes are not fast enough. Before me, another IT guy configured the file server and it was faster; he told my customer that he changed some cache setting, but I don't know exactly what he did. I have been asked to find out how to boost the performance.

    Please suggest more options to try to boost Samba's performance, or tell me if I need to change anything.
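    Whatever set of options you end up with, it is worth running them through Samba's own checker first. A minimal sketch, assuming `testparm` is installed (the scratch path is arbitrary); testparm warns about any parameter name it does not recognize:

```shell
# Dump the tuning options to a scratch config so testparm can
# validate the spelling of each parameter name.
cat > /tmp/smb-tuning.conf <<'EOF'
[global]
    socket options = TCP_NODELAY IPTOS_LOWDELAY
    read raw = No
    write raw = Yes
    max xmit = 131072
    use sendfile = Yes
    deadtime = 15
    getwd cache = Yes
    oplocks = No
    strict allocate = Yes
EOF
# testparm prints "Unknown parameter ..." for anything misspelled:
# testparm -s /tmp/smb-tuning.conf
```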

    Thanks A Lot.

  2. #2
    Raid 1 is not going to be fast enough, and you need a gigabit network to handle the transfers. Raid 1 reads fast but writes like a single drive. That is fine for a small or home network with 2-3 people, but with an office of 10 or more you will choke the drive. A 100Mbit connection will not handle more than one person, and the more people that connect to your file server, the lower the transfer rate per person.

    High read and write speeds can be achieved, but the array needs to be at least raid 0+1, 5, or 10. You'll also need a good dedicated PCI-E 4/8/16x or PCI-X 64-bit SATA raid controller. You don't even have to use the raid on the controller; you can use Linux mdadm to build the array instead. Some newer motherboards now put their onboard SATA on the PCI-E bus, but test to double-check, since some boards still use the old PCI bus.

    To test your server hardware, you can use hdparm to benchmark your drives.

    This dd command tests the write speed of a drive or raid array. Be sure to be in the mounted directory of your array or drive when you execute it; otherwise it will write to the root drive, or to whatever drive is mounted at your current directory.
    dd if=/dev/zero of=output.img bs=8k count=256k
    Use iperf to test the network between a client and the server.
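    One caveat on the dd test above: without a sync flag, dd measures the page cache as much as the disk, which flatters the result on a 4GB-RAM box. A sketch with `conv=fdatasync` so the data is flushed before dd reports (smaller count and a placeholder path so it finishes quickly; point it at a file on the mounted array for a real test):

```shell
# Write test that forces data to disk before dd reports its rate.
OUT=/tmp/ddtest.img          # placeholder; use a file on the array
dd if=/dev/zero of="$OUT" bs=8k count=1k conv=fdatasync
rm -f "$OUT"

# Network side, run on both ends:
#   server:  iperf -s
#   client:  iperf -c <server-ip>
```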

    As for Samba, there is not much more you can do beyond:
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    aio read size = 1
    aio write size = 1
    As for your network, everyone has to be connected at gigabit speed; otherwise you will not see more than 11MB/s in transfers.
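    That 11MB/s figure is just line-rate arithmetic: a 100Mbit/s link carries 12.5MB/s of raw bandwidth, and TCP/IP plus SMB overhead eats roughly 10-15% of it (the 12% factor below is illustrative, not measured):

```shell
# 100 Mbit/s link: divide by 8 bits per byte for the raw byte rate,
# then subtract ~12% for protocol overhead (illustrative figure).
awk 'BEGIN { raw = 100 / 8; printf "raw: %.1f MB/s, usable: ~%.0f MB/s\n", raw, raw * 0.88 }'
# -> raw: 12.5 MB/s, usable: ~11 MB/s
```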

    You will also have to tweak the ipv4 network settings on the server.
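    Tweaking the ipv4 settings usually means raising the kernel's socket-buffer ceilings so the SO_RCVBUF/SO_SNDBUF values above (and TCP autotuning) can actually take effect. A common starting point; the sysctl names are standard, but the values below are illustrative rather than tuned for any particular box:

```shell
# Writes to a scratch path here; the real target is /etc/sysctl.conf,
# applied with `sysctl -p` as root.
cat <<'EOF' > /tmp/sysctl-tuning.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
EOF
```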

    Last but not least, the client end needs to be up to snuff: with a modern computer and a single 7200RPM 500GB SATA drive, you can see Samba transfers average about 50-80MB/s across the network, higher if the client has raid 0 or better.

    If they are using Windows XP or lower, the maximum transfer rate will always be around 22MB/s, because Microsoft had not updated the SMB protocol. Vista and up use the newer SMB protocol, allowing transfers to exceed that limit.

    I have a file server at home on a gigabit network, and I couldn't achieve more than 30MB/s until I upgraded: 6x 7200RPM 1TB Samsung drives, an Intel 2-port 8087 SAS/SATA controller, 2x Adaptec 8087-to-4 fan-out SATA cables, a P5N-D motherboard, a 2.5GHz dual-core Celeron, and 2GB of RAM. The network is mostly gigabit over Cat6, with pfSense as my firewall and DHCP server, connected to a D-Link DGS-1024D gigabit switch.

    The server runs CentOS 5.4 as its OS.

    Between the file server and my gaming desktop, which has only one 7200RPM 500GB drive, I managed an average of 70-80MB/s with a peak of 95MB/s.

    Between my file server and my dad's computer, which has one 7200RPM 1TB Seagate drive, it peaks at 120MB/s with an average of 90-100MB/s.

    Hope this helps.


  3. #3
    These are the results I got. What do you think?

    [root@lm2000 ~]# hdparm -tT /dev/sda
    Timing cached reads: 1900 MB in 2.00 seconds = 950.94 MB/sec
    Timing buffered disk reads: 156 MB in 3.03 seconds = 51.52 MB/sec
    [root@lm2000 ~]# hdparm -I /dev/sda | grep -i speed
    * SATA-I signaling speed (1.5Gb/s)
    * SATA-II signaling speed (3.0Gb/s)

  4. #4
    The disk read is low; 51MB/s is pretty slow. Combine that with Samba overhead and you could only get 22-30MB/s over a gigabit connection, and writing would be worse, more like 10-15MB/s.

    May I ask what you are going to use this samba server for?

    These are my results.
    This is what I get for my 6x 1TB raid 5
    [admindan@fileserver ~]# hdparm -tT /dev/md0
     Timing cached reads:   3248 MB in  2.00 seconds = 1623.70 MB/sec
     Timing buffered disk reads:  642 MB in  3.01 seconds = 213.50 MB/sec
    Now the write test on the array.
    [admindan@fileserver storage]# dd if=/dev/zero of=output.img bs=8k count=256k
    262144+0 records in
    262144+0 records out
    2147483648 bytes (2.1 GB) copied, 19.4136 seconds, 111 MB/s
    If I raid 0 all the drives, I would max out at 585MB/s due to each port being only 300MB/s.
    It was fun, but I needed redundancy with the right amount of space. I could have done raid 6 and gotten 4TB, or raid 0 for 6TB plus extra performance.
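    The capacity numbers follow directly from the parity layout: raid 5 gives up one drive's worth of space to parity, raid 6 two, raid 0 none. With the six 1TB drives here:

```shell
DRIVES=6; TB=1
echo "raid5: $(( (DRIVES - 1) * TB ))TB usable"   # one drive of parity
echo "raid6: $(( (DRIVES - 2) * TB ))TB usable"   # two drives of parity
echo "raid0: $(( DRIVES * TB ))TB usable"         # no redundancy
```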

    [admindan@fileserver storage]# mdadm --detail /dev/md0
            Version : 0.90
      Creation Time : Mon Jun 28 05:00:36 2010
         Raid Level : raid5
         Array Size : 4883811840 (4657.57 GiB 5001.02 GB)
      Used Dev Size : 976762368 (931.51 GiB 1000.20 GB)
       Raid Devices : 6
      Total Devices : 6
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Mon Jan 10 20:42:52 2011
              State : clean
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0
             Layout : right-asymmetric
         Chunk Size : 256K
               UUID : ea8afac1:3a3a8b7c:af08b018:4be1f1af
             Events : 0.142
        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
           2       8       33        2      active sync   /dev/sdc1
           3       8       49        3      active sync   /dev/sdd1
           4       8       97        4      active sync   /dev/sdg1
           5       8      113        5      active sync   /dev/sdh1
    My samba config

    workgroup = workgroup
    server string = File Server
    security = SHARE
    log level = 0
    max log size = 50
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    read raw = yes
    write raw = yes
    aio read size = 1
    aio write size = 1
    use sendfile = Yes
    acl check permissions = no
    This is my SATA controller: an Intel SASUC8I PCI-Express x8 SATA/SAS controller card.

    It has two SFF-8087 mini-SAS ports, each fitted with an Adaptec 2236600-R mini SAS x4 SFF-8087 to x1 Serial ATA fan-out cable (0.5M).

    I don't use the hardware raid on the controller since it doesn't support raid 5. I use mdadm to raid my devices.

    Another thing I forgot to mention was filesystems.

    Filesystems make a big difference in performance. I use ext4 in ordered-data mode for the array, and I have 3 additional drives, each formatted ext3 in journal_data mode, with an rsync cron job doing weekly backups.

    If I put journal_data mode on my array, performance goes down the drain: Samba transfers drop from a nice 50-80MB/s across the network to a measly 15MB/s.
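    The journal mode is set per filesystem, either at mount time or as a stored default. A sketch of the setup described above; the device names and paths are assumptions for illustration, not the poster's actual layout:

```shell
# Array: ext4 in ordered mode (metadata journaled, data flushed first).
#   mount -o data=ordered /dev/md0 /storage
# Backup drives: ext3 with full data journaling (safer, much slower).
#   mount -o data=journal /dev/sdX1 /backup
# data=journal can also be stored as a default mount option:
#   tune2fs -o journal_data /dev/sdX1
# Weekly backup via cron, e.g. from /etc/cron.weekly/backup:
#   rsync -a --delete /storage/ /backup/
```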

    Hope this helps.

  5. #5
    Google: optimizing samba performance

    A site reported very baffling symptoms with MYOB Premier opening and accessing its data files: some operations on the file would take between 40 and 45 seconds.

    It turned out that a printer monitor program running on the Windows clients was causing the problem. In the logs, we saw activity coming through with pauses of about one second.

    Stopping the monitor software brought network access back to normal (quick) speed; restarting the program slowed it down again. The printer was a Canon LBP-810, and the relevant task was something like CAPON (not sure on spelling). The monitor software displayed a "printing now" dialog on the client during printing.

    We discovered this by starting with a clean Windows install and trying the application at every step of installing the other software (we had to do this many times).

    Moral of the story: Check everything (other software included)!
