  1. #1

    Create mdadm RAID 5 array - fails after unplugging HDD


    Today I wanted to create a RAID 5 array using mdadm.
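    For reference, a 3-disk RAID 5 array like the one below is typically created along these lines. This is a sketch with assumed flags, since the original create command isn't shown in the thread; the device names and chunk size match the --detail output that follows.

    ```shell
    # Sketch: create a 3-disk RAID 5 array (flags assumed; the original
    # command was not posted). /dev/sdb, /dev/sdc, /dev/sdd and the 64K
    # chunk size match the --detail output shown below.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        --chunk=64 /dev/sdb /dev/sdc /dev/sdd

    # Watch the initial parity sync:
    cat /proc/mdstat
    ```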

    I have this array:

    :~# mdadm --detail /dev/md0
            Version : 00.90
      Creation Time : Fri Jun 26 14:40:14 2009
         Raid Level : raid5
         Array Size : 2097024 (2048.22 MiB 2147.35 MB)
      Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
       Raid Devices : 3
      Total Devices : 3
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Fri Jun 26 14:48:56 2009
              State : clean
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
             Layout : left-symmetric
         Chunk Size : 64K
               UUID : h7b682ed:bc4a32e6:bda6ad5b:0d479e64 (local to host lan_beta)
             Events : 0.726
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync   /dev/sdb
           1       8       32        1      active sync   /dev/sdc
           2       8       48        2      active sync   /dev/sdd

    It is mounted and working:
    df -m
    /dev/md0                  2016        18      1896   1% /root/test
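    The mount shown above would have been set up roughly like this. A minimal sketch, assuming ext3 as the filesystem and /root/test as the mount point; the thread only shows the latter, the filesystem type is a guess.

    ```shell
    # Sketch: put a filesystem on the array and mount it.
    # ext3 is an assumption; the post does not name the filesystem.
    mkfs.ext3 /dev/md0
    mkdir -p /root/test
    mount /dev/md0 /root/test
    df -m /root/test
    ```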

    Then I wanted to do a failure test:
    I shut down the server and removed one of the 3 HDDs.

    Booted it up:

    mdadm --detail /dev/md0
    mdadm: md device /dev/md0 does not appear to be active

     cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : inactive sdc[0](S) sdb[1](S)
          2097024 blocks
    unused devices: <none>
    Do you have any idea why my array failed? :/
    I read that RAID 5 should survive the failure of one disk (n-1).
    But in my case, the whole array went down.


  2. #2
    You need to read more about the md driver. You didn't test your "failure" correctly. A failure doesn't happen when you nicely shut down the machine, pull a drive, and then reboot. A failure is when the array is online and one of the drives disappears. When that happens, the volume will still be online.

    If the OS is booting and an array member is missing, it will not start the array by default. You can force it to assemble using mdadm commands.
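    A minimal sketch of bringing a degraded array online, assuming the member names from the earlier output (your device names may differ after a disk is removed):

    ```shell
    # If md0 was assembled but left inactive, just start it:
    mdadm --run /dev/md0

    # Or stop it and re-assemble, forcing use of the remaining members
    # (device names assumed from the earlier --detail output):
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc

    # Verify it is running (degraded, but online):
    cat /proc/mdstat
    ```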

  3. #3
    Thank you for the reply,

    Maybe you are right; I don't have a hot-swap interface to test it.

    Thank you for explaining the problem.

    I used your command to start the array:

    mdadm  --force --run /dev/md0
    And everything started working.
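    When the removed disk is plugged back in, it can be returned to the array roughly like this (a sketch assuming the disk reappears as /dev/sdd; the kernel may assign a different name, so check dmesg first):

    ```shell
    # Re-add the returned disk so the array rebuilds onto it
    # (device name /dev/sdd is an assumption):
    mdadm /dev/md0 --add /dev/sdd

    # Watch the rebuild progress:
    cat /proc/mdstat
    mdadm --detail /dev/md0
    ```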

