  1. #1
    Just Joined!
    Join Date
    Jun 2008
    Posts
    2

Question: Fast image deployment?


    We are a non-profit company that recycles corporate computers and electronic waste and then refurbishes the better gear for schools and families. We typically image an array of non-homogeneous machines with an XP Pro install sealed with sysprep. Today we received a request to prepare a good truckload of machines loaded with Linux.

    Keeping the non-homogeneous nature of our hardware in mind, what options do we have for a rapid deployment using Ubuntu?

    One thing I should add is that all the systems will have 20GB HDDs and 512MB of RAM.

  2. #2
    Linux Guru
    Join Date
    Nov 2007
    Posts
    1,759
    Due to the non-homogeneous nature of your HW, I think you are looking at an install run over the network. Most Linux network installs let you create an "answer" file that automatically configures your partitioning, package selection, etc. so that you boot off a CD/PXE, start the install, and then let it go.

    Ubuntu's Installation Doco
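
    In Ubuntu's case that answer file is a preseed file read by the Debian installer. A minimal sketch, assuming the Alternate/netboot installer; the file name and every value here are examples to adapt, not Ubuntu defaults:

    Code:
    # preseed.cfg - minimal unattended install (all values are examples)
    d-i debian-installer/locale string en_US
    d-i mirror/http/hostname string archive.ubuntu.com
    d-i mirror/http/directory string /ubuntu
    # wipe the 20GB disk with a simple one-partition-plus-swap recipe
    d-i partman-auto/method string regular
    d-i partman-auto/choose_recipe select atomic
    d-i partman/confirm boolean true
    # first user account
    d-i passwd/username string student
    d-i passwd/user-password password changeme
    d-i passwd/user-password-again password changeme
    # package selection
    tasksel tasksel/first multiselect ubuntu-desktop
    You then point the installer at it from the boot prompt, e.g. preseed/url=http://yourserver/preseed.cfg when booting from CD or PXE.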

  3. #3
    Just Joined!
    Join Date
    Jun 2008
    Posts
    2
    Looks like automated deployment is the way to go. I've had no luck finding documentation for this at Ubuntu's support site.

  4. #4
    Linux Guru
    Join Date
    Nov 2007
    Location
    Córdoba (Spain)
    Posts
    1,513
    As long as it's hardware that is supported by the Ubuntu kernel, there's no problem with the machines not being identical. The kernel will identify the hardware and load the appropriate modules for each device.

    If the disks are identical, you can use the command dd to make a full image and then deploy that disk image on each machine.

    Code:
    dd if=/dev/hda of=/mnt/whatever/backup.img
    This assumes that hda is your hard drive and that you mounted a storage device under /mnt/whatever/. This should only be done while no partition on hda is mounted, otherwise you can get a corrupted image. To restore it on the target machines, you do:

    Code:
    dd if=/mnt/whatever/backup.img of=/dev/hda
    However, this has one problem: the disk images will be as big as the disk itself. My advice would be to just tar the whole drive this way:

    Code:
    tar -cvjpf /mnt/whatever/backup.tar.bz2 --exclude='/mnt/hda/mnt/*' --exclude='/mnt/hda/tmp/*' --exclude='/mnt/hda/var/tmp/*' /mnt/hda
    This assumes you mounted hda read-only under /mnt/hda (otherwise, just like with dd, you can get a corrupted tarball). You can exclude as many locations as needed. Note that if you do this, you will need to install the bootloader manually (usually by running lilo or grub-install /dev/hda), unlike with dd.
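
    To restore on a target machine, something like this should work; a sketch, assuming a single root partition on hda1, already formatted, and GRUB as the bootloader:

    Code:
    # mount the freshly formatted target partition
    mount /dev/hda1 /mnt/hda
    # the tarball stores its paths under mnt/hda/, so unpack relative to /
    tar -xvjpf /mnt/whatever/backup.tar.bz2 -C /
    # reinstall the bootloader using the restored /boot
    grub-install --root-directory=/mnt/hda /dev/hda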

    You could use dd to install the bootloader, though. Capture the boot sector (the first 512 bytes) on the source machine:

    Code:
    dd if=/dev/hda of=/mnt/whatever/bootsect.img bs=512 count=1
    And this to put it on the rest of the machines:

    Code:
    dd if=/mnt/whatever/bootsect.img of=/dev/hda

  5. #5
    Linux Guru
    Join Date
    Nov 2007
    Posts
    1,759
    After pushing images *thousands* of times with Windows, Linux, Solaris, AIX, and HP-UX, my experience has been that this does not work well. As soon as there is a small change in the disk controller, it will usually not boot. And then if it does boot, there is usually additional reconfiguration needed for the display, NIC, and wireless (at the least.)

    If this worked well, disaster recovery would be as easy as "restore the backup image to any HW available", which is not the case.

    The "autodetect HW" sequence that is run when you have a LiveCD is not the same as when the OS is installed to the HDD. Once installed locally, the config is tailored to the existing HW, like all OS'es.

    Once you finish the "imaging" and getting the OS fully working, you would have spent less time/pain just running an automated network install - and it would be a "cleaner" install.

    My .02

    Edit: I could see "fighting" with imaging if A) the HW is *really* similar and B) you have many customized apps installed that take considerable time to install and set up correctly.

  6. #6
    Linux Guru
    Join Date
    Nov 2007
    Location
    Córdoba (Spain)
    Posts
    1,513
    Quote Originally Posted by HROAdmin26
    After pushing images *thousands* of times with Windows, Linux, Solaris, AIX, and HP-UX, my experience has been that this does not work well. As soon as there is a small change in the disk controller, it will usually not boot.
    As long as your chipset is supported, that shouldn't matter - provided you build the support for ALL of them into your kernel statically, not as modules.
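
    In .config terms that just means =y instead of =m for every controller your fleet might contain; an illustrative fragment (the symbols shown are common IDE/SATA options, pick the ones matching your actual hardware):

    Code:
    # build common disk controller drivers in statically, not as modules
    CONFIG_ATA=y
    CONFIG_ATA_PIIX=y     # Intel PATA/SATA
    CONFIG_SATA_AHCI=y    # generic AHCI
    CONFIG_SATA_NV=y      # NVIDIA
    CONFIG_SATA_VIA=y     # VIA
    CONFIG_PATA_AMD=y     # AMD/NVIDIA PATA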

    And then if it does boot, there is usually additional reconfiguration needed for the display, NIC, and wireless (at the least.)
    It really depends on the distro. Most distros can autoconfigure the NICs at startup. You will need to reconfigure the display only if you are not going to use VESA; VESA should just work.

    If this worked well, disaster recovery would be as easy as "restore the backup image to any HW available", which is not the case.
    It IS the case. It's a common operation that I have done many times, though I don't use disk images; I use a compressed tarball instead and then reinstall GRUB. In fact, this is the standard procedure to defragment a partition under Linux (though in that case the hardware does not change).

    The "autodetect HW" sequence that is run when you have a LiveCD is not the same as when the OS is installed to the HDD.
    That entirely depends on the distro and how you configure it.

    Once installed locally, the config is tailored to the existing HW, like all OSes.
    Not usually. Most distros use prebuilt kernels with EVERYTHING enabled, so as long as the hardware is supported, it should be painless and easy to change it. You will have to reconfigure your xorg.conf if you are going to use 3D acceleration or the like, though.

    Once you finish the "imaging" and getting the OS fully working, you would have spent less time/pain just running an automated network install - and it would be a "cleaner" install.
    I don't think it's any cleaner. Unlike Windows (which breaks horribly if something changes), Linux is just a collection of files, and the same files will be installed regardless of your hardware. It's up to the kernel to detect the hardware and load the correct modules from among those that are on the disk. By default, most Linux OSes ship kernels with ALL the drivers, and the kernel loads them on demand.

    It's not quite as hard as you seem to think.

  7. #7
    Linux Guru
    Join Date
    Nov 2007
    Posts
    1,759
    I work with RHEL and SLES (mainly) day in and day out. This is NOT the case. Many times the HDD controller driver you need must be in the initrd or the system can't boot. If you change systems, you must rebuild the initrd. (Been there, done that.)
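
    Rebuilding it is a one-liner once you know you have to do it; a sketch, assuming the stock tools on each side:

    Code:
    # Ubuntu/Debian: regenerate the initramfs for all installed kernels
    update-initramfs -u -k all
    # RHEL-style mkinitrd (classic syntax; details vary by version)
    mkinitrd /boot/initrd-$(uname -r).img $(uname -r)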

    All of your answers are "it depends on how you set it up" or "it depends on the distro." We are talking about Ubuntu here and an "out of the box" setup, not some customized kernel with everything compiled in. *Most* things that can be compiled as modules are built as modules by the distros.

    Reconfiguring your X, etc. leaves multiple backup copies every time it's edited by a GUI utility, which means you have these types of files to clean up/delete. I have run into situations where an existing config file simply would not reconfigure correctly using a GUI utility and needed to be deleted/recreated from a clean base to get things working correctly. Again, more manual steps and knowledge required, and more garbage left behind.

    It IS the case. It's a common operation that I have done many times, though I don't use disk images; I use a compressed tarball instead and then reinstall GRUB. In fact, this is the standard procedure to defragment a partition under Linux (though in that case the hardware does not change).
    If you do this, and something is needed in the initrd, the system will not boot. Once again, this requires specific knowledge of what's in the image and what HW is in the machine being imaged. There are other problems that can arise as well.

    It's not quite as hard as you seem to think.
    I am in the process of building a menu-driven PXE boot system to bounce systems to the appropriate boot server (RIS, Ignite, Jumpstart, Kickstart, etc.). Some of these can do imaging AND network installs, and I'm familiar with the many ways each can go wrong.
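
    The menu side of that is just a pxelinux config; a sketch of a single entry, assuming the Ubuntu netboot kernel/initrd on the TFTP server, menu.c32 present, and the kind of preseed file mentioned earlier (paths and the URL are examples):

    Code:
    # pxelinux.cfg/default
    DEFAULT menu.c32
    PROMPT 0
    TIMEOUT 100

    LABEL ubuntu-auto
      MENU LABEL Automated Ubuntu install
      KERNEL ubuntu-installer/i386/linux
      APPEND initrd=ubuntu-installer/i386/initrd.gz preseed/url=http://yourserver/preseed.cfg auto=true priority=critical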

  8. #8
    Linux Guru
    Join Date
    Nov 2007
    Location
    Córdoba (Spain)
    Posts
    1,513
    Just don't use initrds. And if you do, build the support for chipsets statically, as I said, and everything will be all right. Nothing stops you from rebuilding your kernel on Ubuntu; there's no black magic involved, and you can recycle the config of a standard kernel from /proc/config.gz, change the relevant bits, and recompile.
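
    The whole cycle is short; a sketch, assuming the running kernel exposes /proc/config.gz (otherwise grab the config from /boot):

    Code:
    cd /usr/src/linux
    zcat /proc/config.gz > .config      # or: cp /boot/config-$(uname -r) .config
    make oldconfig                      # only asks about new options
    make menuconfig                     # flip the chipset drivers from M to Y
    make && make modules_install install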

    Either way, nothing stops you from putting ALL the drivers in the initrd. I don't know how Ubuntu handles that, but, indeed, if it only puts one driver in the initrd, then that's just one more thing Ubuntu does badly.
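
    For the record, on Ubuntu that appears to be controlled by initramfs-tools; a sketch, assuming the stock configuration file:

    Code:
    # /etc/initramfs-tools/initramfs.conf
    MODULES=most    # pack most storage modules, not just the detected ones
    After editing, regenerate with update-initramfs -u.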

    PS: I am a Gentoo user, and my current installation has been the same for a couple of years, since I migrated to 64-bit (and only BECAUSE I migrated to 64-bit; otherwise I would be running a four-year-old installation). This installation has survived three different mainboards, with different chipsets and different hard drives, and it works just like the first day. No problem at all.

  9. #9
    Linux Guru bigtomrodney
    Join Date
    Nov 2004
    Location
    Ireland
    Posts
    6,133
    Quote Originally Posted by i92guboj
    Either way, nothing stops you from putting ALL the drivers in the initrd. I don't know how Ubuntu handles that, but, indeed, if it only puts one driver in the initrd, then that's just one more thing Ubuntu does badly.
    I have to disagree with that. Firstly, the purpose of an initrd is to get the kernel booted far enough to mount the root filesystem. If you're going to have every driver built in, you are treating the initrd as a kernel. I've been caught out on Ubuntu, openSUSE and Mandriva when switching motherboards out because the IDE and/or SATA controller was different and the installed distro's initrd didn't have the driver needed to mount the root filesystem. Ubuntu is not some black sheep in this regard; this would seem to be the standard behaviour.

    I think the discussion changed a little here from "it just works" to "it works if you do the preparation and modify the initrd/kernel". I agree it is possible to prepare for this. The critical piece is that the controller drivers need to be present at the initrd stage.
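
    A quick way to verify that before swapping hardware is to list the initrd's contents; a sketch, assuming the usual gzipped-cpio format and Ubuntu's naming:

    Code:
    # check whether your controller's module made it into the initrd
    zcat /boot/initrd.img-$(uname -r) | cpio -it | grep -i -e ahci -e piix -e sata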

    You could check out the OEM install option on the Ubuntu Alternate discs to get going, then image that system; it seems to be very similar to sysprep for Win2K or XP. Another alternative would be to use AptOnCD if there are enormous amounts of software to be installed afterwards.

    However I would second the PXE boot method mentioned above before considering anything else.

  10. #10
    Linux Guru
    Join Date
    Nov 2007
    Location
    Córdoba (Spain)
    Posts
    1,513
    Quote Originally Posted by bigtomrodney
    I have to disagree with that. Firstly, the purpose of an initrd is to get the kernel booted far enough to mount the root filesystem. If you're going to have every driver built in, you are treating the initrd as a kernel. I've been caught out on Ubuntu, openSUSE and Mandriva when switching motherboards out because the IDE and/or SATA controller was different and the installed distro's initrd didn't have the driver needed to mount the root filesystem. Ubuntu is not some black sheep in this regard; this would seem to be the standard behaviour.
    Well, that's why I stay away from binary-based distros. In 1400 it was the "standard behavior" to believe the Earth was flat, which didn't make it true.

    I think the discussion changed a little here from "it just works" to "it works if you do the preparation and modify the initrd/kernel". I agree it is possible to prepare for this. The critical piece is that the controller drivers need to be present at the initrd stage.
    Sure. But if you want something that operates in a very specific, custom way, you usually need to work a bit for it. By building statically you save yourself the hassle of using an initrd, by the way.

    However I would second the PXE boot method mentioned above before considering anything else.
    Installing with PXE is a valid alternative, and it's what I would use if you are not willing to read a bit and recompile your kernel (as I said, it's not THAT hard, and you can reuse the config of the distro kernel). I fail to see the difficulty in that.

