Thread: network connection too fast ...
When I transfer a large amount of data, somewhere along the line the session is dropped.
Here is roughly what I do:
tar cz large_directory | ssh -C remoteServer tar xz
So basically I tar a large directory ( > 1GB ) and push it through an SSH tunnel to another server where I untar it.
I have the same problem when I use rsync -e ssh, though rsync gets dropped even more quickly. The same happens when I transfer the database using mysqldump.
How can I limit the transfer rate on my sending machine? It is a basic CentOS 4.5 install, nothing fancy. Suggestions anyone?
You could use the "--rate-limit <RATE>" option of pv (Pipe Viewer; the online man page is at ivarch.com).
If you don't have root, you cannot install it as a package, so either ask the admin to install it or compile it from source.
Other than that, you can use rsync instead of tar.
rsync also lets you limit the bandwidth (--bwlimit=KBPS) and can be used over ssh.
But even if one or the other tool works for you, it just covers up a potential network issue.
The best option would be to investigate and solve the problem.
As a nice side effect, your script will be faster.
What you do can also be done with:
$ tar -c bla
$ scp -l <limit in Kbit/s> bla remote:bla
$ ssh remote
$ tar -x bla
Maybe your connection gets dropped because the administrator of one server limits the amount of data that may be transferred in a single session? Splitting the archive into smaller pieces could help in that case:
$ tar -c -M -L <size in Kbyte> bla
$ for i in bla*; do scp -l <limit in Kbit/s> "$i" remote:"$i"; done
$ ssh remote
$ tar -x -M bla
Last edited by Kloschüssel; 01-23-2012 at 07:27 AM. Reason: added multi volume idea