  1. #1
    Just Joined!
    Join Date
    Jul 2011
    Posts
    18

    Server with RAID and hot swappable drives


    Hello,

    I have been trying for some time now to build a Debian server with some hard drives in RAID (1 or 5) and one hard drive that gets pulled out and taken away from the office (it contains all the data, in case of disaster at the office).

    However, I have to shut down the server to remove the hard drive (hot swappable would be perfect!!), and the other drives then change their names, going for instance from /dev/sda to /dev/sdb ...

    So the system will not reboot correctly when I have removed a hard drive.

    How should I proceed with the raid, fstab and other elements where each hard drive must be assigned a function and/or mounting point?

    I have been trying to play with the UUIDs, but it is becoming unmanageable.

    Thank you in advance

  2. #2
    Just Joined!
    Join Date
    Jan 2011
    Location
    Fairfax, Virginia, USA
    Posts
    94
    Hi NiceLittleRabbit,
    Usually people use a UUID instead of a device name to accomplish what you're looking for. Instead of sticking the device into /etc/fstab, use the filesystem's UUID.

    You can see your UUID to stick in fstab with a command like this:
    Code:
    tune2fs -l  /dev/sda2 | grep UUID
    If the UUID was XXXXXXXX-XXXX-XXXXXXXXX-XXXXXXXXX, then your fstab would look like this:
    Code:
    UUID=XXXXXXXX-XXXX-XXXXXXXXX-XXXXXXXXX /                       ext4    defaults        1 1
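    If you want to see every filesystem's UUID (and label) at once, blkid should also work, assuming util-linux is installed; the exact output format varies between versions:
    Code:
    # list UUID, LABEL and TYPE for all block devices
    blkid
    # or just one device
    blkid /dev/sda2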

  3. #3
    Just Joined!
    Join Date
    Jul 2011
    Posts
    18
    Hello Brian,

    That is how I am currently doing it, but I find it quite unmanageable. For instance, the format of the UUID is different in the mdadm.conf file than elsewhere. Also, it makes it more complicated to understand which disks are doing what.

    Isn't there another way? I am looking for a way of having hot swappable disks, so that I do not need to shut down the server to pull the hard drive out and put it back in. Does that exist?

  4. #4
    drl
    Linux Engineer
    Join Date
    Apr 2006
    Location
    Saint Paul, MN, USA / CentOS, Debian, Slackware, {Free, Open, Net}BSD, Solaris
    Posts
    1,283
    Hi.

    I don't know if labels would work everyplace that UUIDs would, but it might be worth an attempt:
    Code:
           Instead of giving the device explicitly, one may indicate the (ext2 or
           xfs) filesystem that is to be mounted by its UUID or volume label (cf.
           e2label(8) or xfs_admin(8)), writing LABEL=<label> or UUID=<uuid>,
           e.g., `LABEL=Boot' or `UUID=3e6be9de-8139-11d1-9106-a43f08d823a6'.
           This will make the system more robust: adding or removing a SCSI disk
           changes the disk device name but not the filesystem volume label.
     -- excerpt from man fstab
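    For an ext2/3/4 filesystem, a minimal sketch might look like this (the label "backupdisk" and the mount point are just examples, not anything from your setup):
    Code:
    # give the filesystem a volume label
    e2label /dev/sdc1 backupdisk

    # /etc/fstab: mount by label instead of device name;
    # "noauto" keeps a missing (pulled) disk from blocking boot
    LABEL=backupdisk  /mnt/backup  ext4  defaults,noauto  0  2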
    Good luck ... cheers, drl
    Welcome - get the most out of the forum by reading forum basics and guidelines: click here.
    90% of questions can be answered by using man pages, Quick Search, Advanced Search, Google search, Wikipedia.
    We look forward to helping you with the challenge of the other 10%.
    ( Mn, 2.6.n, AMD-64 3000+, ASUS A8V Deluxe, 1 GB, SATA + IDE, Matrox G400 AGP )

  5. #5
    Just Joined!
    Join Date
    Jan 2011
    Location
    Fairfax, Virginia, USA
    Posts
    94
    Quote Originally Posted by NiceLittleRabbit View Post
    Hello Brian,

    That is how I am currently doing it, but I find it quite unmanageable. For instance, the format of the UUID is different in the mdadm.conf file than elsewhere. Also, it makes it more complicated to understand which disks are doing what.

    Isn't there another way? I am looking for a way of having hot swappable disks, so that I do not need to shut down the server to pull the hard drive out and put it back in. Does that exist?
    I think there are a bunch of UUIDs and it gets confusing. The UUID of the RAID for mdadm (from mdadm --detail) is different from the UUID of the filesystem (from tune2fs). For fstab or mount, I think you always want to use the UUID of the filesystem. If your filesystem resides on a RAID, and if your RAIDs are assembled by mdadm under a known name (from mdadm.conf), then you can use the RAID device itself in fstab (for instance, in grub you could say root=/dev/md2) and not worry about device names changing as your hardware environment changes.
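    As a rough sketch of how those pieces might fit together (the device names, mount point and UUID below are placeholders, not your actual values):
    Code:
    # /etc/mdadm.conf -- pin the array to a fixed name via its RAID UUID
    ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

    # /etc/fstab -- refer to the md device (or its filesystem UUID), never /dev/sdX
    /dev/md0   /srv/data   ext4   defaults   0  2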

  6. #6
    Just Joined!
    Join Date
    Jul 2011
    Posts
    18
    Hello,

    I will try labels once I know how they work.

    On the UUID side, I have googled around but did not find how I can create and maintain a RAID array with UUIDs instead of /dev/sd[x]. Could you help?

    Cheers

  7. #7
    Just Joined!
    Join Date
    Jan 2011
    Location
    Fairfax, Virginia, USA
    Posts
    94
    The mdadm RAID UUIDs are constructed when the array is first assembled and each RAID element contains this UUID. As far as I can tell, the intention of the UUID is so multiple RAIDs can be auto-assembled (probably with mdadm -As) as long as the partition type is "Linux raid autodetect". Your RAID UUID is probably in your /etc/mdadm.conf file.

    Code:
    [root@desktop ~]# mdadm --detail /dev/md127 | grep UUID
               UUID : XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX
    Code:
    [root@desktop ~]# cat  /etc/mdadm.conf
    [...]
    ARRAY RAID metadata=1.2 UUID=XXXXXXXX:XXXXXXXX:XXXXXXXX:XXXXXXXX level=6 
    [...]
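    If that file is missing its ARRAY line or has gone stale, the usual trick is to let mdadm print it for you (double-check the output before appending; on Debian the file lives at /etc/mdadm/mdadm.conf):
    Code:
    # print an ARRAY line (with UUID) for every currently assembled array
    mdadm --detail --scan

    # append it to the config, then on Debian rebuild the initramfs so the
    # early-boot copy of mdadm.conf matches the one on the root filesystem
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u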

  8. #8
    Just Joined!
    Join Date
    Jul 2011
    Posts
    18
    Hello Brian,

    Indeed, /etc/mdadm/mdadm.conf contains the /dev/md0 UUID. However, when I shut down the server, add an additional drive and reboot, /dev/md0 is built differently (apparently my new drive takes the place of the former /dev/sda or something).

    So I was thinking that if I could tell RAID which UUIDs make the RAID array, then this would no longer be an issue. Can I do that somewhere?

    I would have guessed /etc/mdadm/mdadm.conf was the place, but I do not see how this should be done.

    Thanks in advance

  9. #9
    Just Joined!
    Join Date
    Jan 2011
    Location
    Fairfax, Virginia, USA
    Posts
    94
    This is how I think it's supposed to work ... I'm not an expert, so bear with me:

    Somewhere during the boot cycle, a command something like this:
    Code:
    mdadm -As
    is invoked to scan for and assemble your RAID. The -s option (among other things) means mdadm is allowed to use your mdadm.conf file. In your mdadm.conf file, you probably have a UUID for your array ... that UUID is referenced by mdadm. Next, mdadm scans all your partitions (well, sort of) for the "System" tag 0xfd, which is "Linux raid autodetect". Each of these discovered devices is interrogated by mdadm and the device's destination array UUID is disclosed. The interrogation happens by examining each device's metadata. You can see this metadata by issuing a command like:
    Code:
    mdadm --examine /dev/sda2
    Finally, for each destination array, mdadm makes a decision to start the array depending on what other arguments it was invoked with and how many devices were discovered.


    Specific block device names (like /dev/sda2) are irrelevant because mdadm typically searches all devices exhaustively. The only limitation I am aware of is the DEVICE and AUTO lines in mdadm.conf (but I am not an expert), which may limit the scope of what mdadm searches, as in the example below.
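    For reference, the kind of lines I mean would look something like this (the values are just examples; see man mdadm.conf for the real syntax):
    Code:
    # only consider block devices that appear in /proc/partitions
    DEVICE partitions

    # only auto-assemble arrays with 1.x metadata, ignore everything else
    AUTO +1.x -all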

    What you may be seeing on your end is an illusion. If your RAID is assembled in the initrd phase of boot, the illusion could be caused by your initrd's mdadm.conf being different than your root fs's mdadm.conf. You can disassemble your initrd with cpio and see if there is a problem there.
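    A rough sketch of how you might unpack a gzip-compressed initrd to check (the exact image name and compression depend on your setup, and newer initramfs images can have extra pieces prepended):
    Code:
    mkdir /tmp/initrd && cd /tmp/initrd

    # assuming a gzip-compressed cpio archive (the common Debian case)
    zcat /boot/initrd.img-$(uname -r) | cpio -idmv

    # compare the early-boot copy with the one on your root fs
    diff etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf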

    Mechanically, I'm not sure how your RAID could be assembled incorrectly ... you could experiment with using fdisk or parted to change the "System" tag from 0xfd to some other identifier so mdadm won't search the bad partitions.

    All of this is described pretty well in man mdadm and man mdadm.conf.

  10. #10
    Just Joined!
    Join Date
    Jul 2011
    Posts
    18
    Thanks Brian,

    If mdadm does work this way, it should indeed not be possible to get it wrong, so I will investigate further to understand what is going on.

    Thank you for shedding some light on my ignorance.
