  1. #1
    jippie (Just Joined!)
    Join Date: May 2006
    Eindhoven, the Netherlands

    network connection too fast ...

    Yes, contrary to most people, I am complaining that the connection between two servers is too fast (1 Gbps). I administer neither of these servers; I only have unprivileged accounts on both machines.

    When I transfer a large amount of data, somewhere along the line the session is dropped.

    In pseudo code here is what I do:

    tar cz large_directory | ssh -C remoteServer tar xz

    So basically I tar a large directory ( > 1GB ) and push it through an SSH tunnel to another server where I untar it.

    I have the same problem when I use rsync -e ssh, but the rsync session is dropped even sooner. The same happens when I transfer a database using mysqldump.

    How can I limit the transfer rate on the sending machine? It is a basic CentOS 4.5 install, nothing fancy. Suggestions, anyone?

  2. #2
    Irithori (Trusted Penguin)
    Join Date: May 2009
    You could use the --rate-limit <RATE> option of Pipe Viewer (pv).
    As you don't have root, you cannot install it as a package, so either ask the admin to install it or compile it from source.
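    In the pipeline from the first post, pv would sit between tar and ssh. A minimal sketch, assuming pv is installed; the host name remoteServer and the 10 MiB/s cap are placeholders:

    ```shell
    # pv -L (long form --rate-limit) caps the pipe's throughput, so the
    # full transfer would look like (placeholders: remoteServer, 10m):
    #   tar cz large_directory | pv -q -L 10m | ssh -C remoteServer 'tar xz'
    # Minimal local check that pv passes data through unchanged while limiting rate:
    printf 'hello' | pv -q -L 10k > /tmp/pv_demo_out
    cat /tmp/pv_demo_out
    ```
    
    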

    Other than that, you could use rsync instead of tar.
    rsync can also limit the bandwidth (--bwlimit=KBPS) and can be used over ssh.
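    A concrete invocation as a sketch; the host name remoteServer and the 5000 KiB/s cap are placeholders, not from the thread:

    ```shell
    # --bwlimit is in KiB/s, so 5000 caps the transfer near 5 MB/s.
    # Over ssh the call would be (remoteServer is a placeholder):
    #   rsync -az --bwlimit=5000 -e ssh large_directory/ remoteServer:large_directory/
    # Local demonstration that --bwlimit is accepted and the data arrives intact:
    rm -rf /tmp/rsync_src /tmp/rsync_dst
    mkdir -p /tmp/rsync_src
    echo "hello" > /tmp/rsync_src/f.txt
    rsync -a --bwlimit=5000 /tmp/rsync_src/ /tmp/rsync_dst/
    cat /tmp/rsync_dst/f.txt
    ```
    
    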

    But even if one or the other tool works for you, it just covers up a potential network issue.
    The best option would be to investigate and solve the problem.
    As a nice side effect, your script will be faster.

  3. #3
    Kloschüssel (Linux Engineer)
    Join Date: Oct 2005
    The same thing can be done by hand with:

    $ tar czf bla.tar.gz large_directory
    $ scp -l <limit in Kbit/s> bla.tar.gz remote:bla.tar.gz
    $ ssh remote
    $ tar xzf bla.tar.gz
    which is what you would do manually. I don't know if your idea works, but piping lots of binary data through an ssh tunnel from one process to the other feels odd to me.

    Maybe your connection gets dropped because the administrator of one server limits the amount of data that may be transferred in one session? Splitting the archive into smaller pieces could help in this case:

    $ tar -c -M -L <size in KiB> -f bla.tar large_directory
    $ for i in bla*; do scp -l <limit in Kbit/s> "$i" remote:"$i"; done
    $ ssh remote
    $ tar -x -M -f bla.tar
    Last edited by Kloschüssel; 01-23-2012 at 06:27 AM. Reason: added multi volume idea
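    A related way to get smaller pieces, without tar's interactive multi-volume prompts, is to cut the compressed stream with split. This is a sketch of that alternative, not from the thread; the chunk_ prefix and the /tmp paths are illustrative, and in practice each chunk_* file would be copied with "scp -l <limit>" before reassembly on the remote end:

    ```shell
    # Pack, cut the archive into fixed-size pieces, then reassemble and unpack.
    rm -rf /tmp/split_demo /tmp/restore_demo
    mkdir -p /tmp/split_demo/demo_dir /tmp/restore_demo
    echo "payload" > /tmp/split_demo/demo_dir/file.txt
    cd /tmp/split_demo
    tar cz demo_dir | split -b 1m - chunk_     # 1 MiB pieces: chunk_aa, chunk_ab, ...
    cat chunk_* | tar xz -C /tmp/restore_demo  # concatenate and unpack
    cat /tmp/restore_demo/demo_dir/file.txt
    ```
    
    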

