  #1 Freston (Linux Engineer) | Join Date: Mar 2007 | Location: The Netherlands | Posts: 1,049

    Q. about the default behavior of netcat


    I'm looking for a way to push ~2.8GB over a network to several machines at the same time. It's an image. I've got it figured out and working reliably for 1 recipient at a time. But I need to scale the procedure to serve ... ehm *estimates* anywhere from 7 to 23 clients at the same time.

    I can put my question in the form of a multiple choice:

    Assume one uses netcat (`nc`) to push ~2.8GB over the network to multiple recipients. Does it:
    A) Push 2.8GB to box1, then proceed to push 2.8GB to box2, etc... (serial)
    B) Push n*2.8GB to box(1-n) simultaneously (parallel)
    C) Or can it be made to push the same 2.8GB to box(1-n) simultaneously (broadcast)
    D) Crash, stall, burn, explode?
    With box(1-n) I mean box1, box2, box3, etc...
    And n*2.8GB means 7*2.8GB in the case of 7 recipients, or 23*2.8GB in the case of 23 recipients O_o

    I prefer to use some sort of broadcasting, although I can live with parallel. But if netcat defaults to serial, then I'll have to think of a whole new way to do this. I would test this myself, but I don't have enough physical machines to test this aspect.
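    For reference, my working single-recipient setup is essentially the sketch below; box1 and port 9000 stand in for the real names, and flag spellings differ between netcat variants:

    ```sh
    # On the receiving box (traditional netcat; OpenBSD nc spells it "nc -l 9000"):
    nc -l -p 9000 > image.img

    # On the server, push the image to that one box. -q 1 makes GNU/traditional
    # netcat exit shortly after EOF; OpenBSD nc uses -N instead.
    nc -q 1 box1 9000 < image.img
    ```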
    Can't tell an OS by its GUI

  #2 ohgary (Just Joined!) | Join Date: Aug 2007 | Posts: 25
    How about rsync?

  #3 Freston (Linux Engineer) | Join Date: Mar 2007 | Location: The Netherlands | Posts: 1,049
    Thanks, but it's more a matter of scale than a matter of tools. If I hook up, let's say, 10 machines to the server, it will need to move 10*2.8GB = 28GB whether I use rsync, nc, scp, sshfs, nfs, rcp or whatever other means I know of.

    What I'm looking for is a way to just push out 2.8GB and have it received by 10 boxes. But I seem to have been thinking in the wrong terms, considering the wrong tools, and taking the wrong approach on this. Back to the drawing board!
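    For what it's worth, the parallel option can at least read the image from disk only once. A rough sketch using bash process substitution, with box1..box3 and port 9000 as placeholders:

    ```sh
    # Fan a single read of the image out to several listening boxes at once.
    tee >(nc -q 1 box1 9000) \
        >(nc -q 1 box2 9000) \
        >(nc -q 1 box3 9000) > /dev/null < image.img
    ```

    Each box still receives its own full 2.8GB stream, so this spares the server's disk, not the network.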


    I've been doing some calculations, and I can't get around another main bottleneck: juice! I think I'll have a hard time getting power for ten or twelve machines at a time, and that assumes I succeed in making the whole procedure headless. If I have to power the monitors as well O_o
    So that scraps the plan of using a 24-port switch. Unless you know of some hidden wall outlets in my office (and a place to put all these apparatuses somewhere in between the rest of my mess).



    Well, if I can't find a way to do this as above, then I'm falling back to my original plan of using nfs and keeping an `nc` channel open only for client status updates. It's ugly and slow, but at least it's gonna work.
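    A minimal sketch of that status channel, assuming the server answers to imgserver and port 9001 is free (traditional netcat flag spellings):

    ```sh
    # On the server: collect one status line per client into a log.
    while true; do nc -l -p 9001; done >> client-status.log

    # On each client, once its copy of the image has been written:
    echo "$(hostname): image written" | nc -q 1 imgserver 9001
    ```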
    Can't tell an OS by its GUI

  #4 ohgary (Just Joined!) | Join Date: Aug 2007 | Posts: 25
    Instead of MOVING the data, how about leaving it on one machine and mounting it? That way it's always there and up to date. No moving of data. This does assume reasonable local network speeds.
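    A minimal sketch of the mounting idea over NFS; the server name, paths and subnet below are placeholders:

    ```sh
    # /etc/exports on the machine holding the image (then run: exportfs -ra):
    #   /srv/images 192.168.1.0/24(ro,no_subtree_check)

    # On each client, mount read-only and use the image in place:
    mount -t nfs imgserver:/srv/images /mnt/images
    ```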

  #5 Freston (Linux Engineer) | Join Date: Mar 2007 | Location: The Netherlands | Posts: 1,049
    The machines have to become stand-alone.

    The network is an ad-hoc build, specifically for pushing generic images to empty machines. There are ~80 of them, so I feel the effort of automating the procedure will eventually save me a lot of time. If it were 2 or 3 boxes I'd install them manually, or make something less complicated like a liveUSB that can deliver an image. But there are ~eighty of them.


    The one thing working in my favour is that they all default to netboot if they can't find an OS on any of their disks. I can use this to automate each and every step, keeping the clients headless. I offer them an initrd that takes all the steps necessary to write the image to disk. This works reliably for one machine at a time, but I want to scale it up.
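    The initrd side boils down to something like the sketch below; the device name, port and gzip compression are assumptions, not my exact script:

    ```sh
    #!/bin/sh
    # Wait for the server to connect and push the image, write it straight
    # to disk, flush the buffers, then power off so the box is done.
    nc -l -p 9000 | gunzip -c | dd of=/dev/sda bs=4M
    sync
    poweroff -f
    ```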

    netcat does allow UDP broadcast, but I don't trust UDP for pushing an image where every bit counts. I could run an md5sum on the image after arrival, and request a new push if it fails... but I'm afraid that would only strain the network more. (I might be wrong though)
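    For illustration, the UDP version with the checksum step might look like the sketch below; 192.168.1.255 stands in for the real broadcast address, and not every netcat build will send to a broadcast address (some need a patched-in -b flag, others lack support entirely):

    ```sh
    # On each client: catch the UDP stream (traditional netcat flags).
    nc -u -l -p 9000 > image.img

    # On the server: blast the image at the broadcast address.
    nc -u 192.168.1.255 9000 < image.img

    # On each client afterwards: verify against a published checksum file.
    md5sum -c image.img.md5
    ```

    Purpose-built tools such as udpcast's udp-sender/udp-receiver add the sequencing and retransmission that raw `nc` lacks.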
    Can't tell an OS by its GUI

  #6 ohgary (Just Joined!) | Join Date: Aug 2007 | Posts: 25
    Quote Originally Posted by Freston
    Thanks, but it's more a matter of scale than a matter of tools. If I hook up, let's say, 10 machines to the server, it will need to move 10*2.8GB = 28GB whether I use rsync, nc, scp, sshfs, nfs, rcp or whatever other means I know of.
    If you have 10 boxes and 2.8GB of data, then regardless of how you move it you're going to move 28GB. You either mount it or move it. Volume is less with mounts, but since that doesn't appear to be an option, you need to move it one way or another. I synchronize over 200 boxes each night with ~1.5GB of data using rsync and have not had an issue. I stagger my sync over a 3-hour period, and it works fine.
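    A minimal sketch of that staggered approach; hosts.txt, the paths and the 60-second gap are placeholders:

    ```sh
    # Push the data to each host in turn, pausing between boxes so the
    # load is spread over time instead of hitting everything at once.
    # The < /dev/null keeps rsync's ssh from eating the host list.
    while read -r host; do
        rsync -a --partial /srv/images/ "root@${host}:/srv/images/" < /dev/null
        sleep 60
    done < hosts.txt
    ```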

  #7 Freston (Linux Engineer) | Join Date: Mar 2007 | Location: The Netherlands | Posts: 1,049
    Quote Originally Posted by ohgary
    If you have 10 boxes and 2.8GB of data, then regardless of how you move it you're going to move 28GB. You either mount it or move it. Volume is less with mounts, but since that doesn't appear to be an option, you need to move it one way or another. I synchronize over 200 boxes each night with ~1.5GB of data using rsync and have not had an issue. I stagger my sync over a 3-hour period, and it works fine.
    Well, reliability is more important than speed. I'll give rsync a shot.
    Can't tell an OS by its GUI

  #8 ohgary (Just Joined!) | Join Date: Aug 2007 | Posts: 25
    Also, you can kick off rsync to 1 or N servers at the same time; your network and server load will be the determining factor in how many you can sync simultaneously. And does your data need to be snapshotted all at once, or can you run rsync hourly, pick up updates throughout the day, and spread your 2.8GB over 24 hours instead of a couple?

    Data needs dictate a lot, but rsync is pretty slick.
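    A sketch of the N-at-a-time variant using GNU xargs, where -P 4 caps the number of simultaneous transfers; hosts.txt and the paths are again placeholders:

    ```sh
    # Kick off up to four rsyncs in parallel, one per host in hosts.txt.
    xargs -a hosts.txt -P 4 -I{} \
        rsync -a --partial /srv/images/ "root@{}:/srv/images/"
    ```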
