  1. #1

    Soft RAID: degraded array (resolved)

    Hello all,

    Following a big power failure my computer crashed (the motherboard was dead).
    Once the motherboard was replaced, I saw that my software RAID 1 array was marked as "degraded".

    I have resolved this problem, and as it is not really easy to repair (I wasn't able to find info on Google), I wanted to post something in order to help others in a similar case. (Google will index this page, I hope.)

    The problem:

    In the logs I saw things like:

    md: kicking non-fresh hdc1 from array!
    kernel: ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:DMA
    kernel: hdc: max request size: 512KiB
    kernel: hdc: 398297088 sectors (203928 MB) w/8192KiB Cache, CHS=24792/255/63, UDMA(133)
    kernel: hdc: cache flushes supported
    kernel: hdc: hdc1
    kernel: md: bind<hdc1>
    kernel: md: unbind<hdc1>
    kernel: md: export_rdev(hdc1)
    My array is composed of /dev/hda1 and /dev/hdc1, forming /dev/md0.
    This /dev/md0 is used entirely for LVM2 logical volumes. My / (root) is on the array, and so is my /boot.

    The second half of the array could not be resynced at boot.

    mdadm --query --detail /dev/md0
    showed something like:

            Version : 00.90.03
      Creation Time : Sat Jan 14 14:35:17 2006
         Raid Level : raid1
         Array Size : 199141632 (189.92 GiB 203.92 GB)
        Device Size : 199141632 (189.92 GiB 203.92 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Thu Jul 20 11:14:57 2006
              State : clean
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0
               UUID : 1a94bb53:1c629336:3129889f:fb3064cb
             Events : 0.12489983
        Number   Major   Minor   RaidDevice State
           0       3        1        0      active sync   /dev/hda1
           1       0        0        0      removed
    The second disk, /dev/hdc1, was "removed" but not failed.
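    This "removed but not failed" state can be spotted from a script. A minimal sketch, run here against the relevant excerpt of the output quoted above rather than a live array (mdadm --detail needs root):

```shell
# Check a saved `mdadm --detail` output for an empty member slot.
# The sample text below is the excerpt from the output shown above.
detail='
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
    0       3        1        0      active sync   /dev/hda1
    1       0        0        0      removed
'
if printf '%s\n' "$detail" | grep -q 'removed'; then
    echo "array is degraded: a member slot is empty"
fi
```

    On the real system you would pipe `mdadm --query --detail /dev/md0` straight into the same grep.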

    I tried to hot-add /dev/hdc1 to the array by typing:

    mdadm /dev/md0 --add /dev/hdc1

    But the system answered:

    Cannot open /dev/hdc1: Device or resource busy

    So I thought LVM might be the cause... and I was right.

    Since /dev/hdc1 was unable to join the array, it was left available to LVM
    (in the boot sequence there is "vgchange -a y vg0").

    So the device was held busy by the kernel itself; that's why lsof showed nothing in use on /dev/hdc1.
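    lsof only sees userspace file opens; when the kernel itself claims a device (here, LVM's device-mapper grabbing /dev/hdc1), the holder shows up in sysfs instead. A sketch of that check; it simply scans whatever block devices the machine running it has, so on the broken box hdc1 would list a dm-* holder:

```shell
# Kernel-side holders of each block device, read from sysfs.
scanned=0
for h in /sys/class/block/*/holders/*; do
    [ -e "$h" ] || continue          # skip if the glob matched nothing
    part=${h#/sys/class/block/}      # strip the sysfs prefix...
    part=${part%%/*}                 # ...leaving just the device name
    echo "$part is held by ${h##*/}"
done
scanned=1
```

    Note that simply deactivating the volume group (vgchange -a n vg0) before the hot-add would release the device, but that is not an option here: / itself lives on the VG, which is why the fix below goes through the initrd instead.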

    So I opened my initrd image, mounted it, and copied it so I could modify it.

    In the "script" file /mnt/initrd17tip-rw/script
    you can see how the array is assembled at boot time:

    mdadm -A /dev/md0 -R -u 1a94bb53:1c629336:3129889f:fb3064cb /dev/hda1 /dev/hdc1
    So I added

    mdadm /dev/md0 --add /dev/hdc1

    just after the assemble line that was failing.
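    The edit itself can also be done with sed instead of by hand. A sketch, reproduced here on a scratch copy of the initrd's script file (the assemble line and UUID are the ones quoted above; the temp file is only for the demo):

```shell
# Rebuild the one-line initrd "script" in a temp file for the demo.
f=$(mktemp)
cat > "$f" <<'EOF'
mdadm -A /dev/md0 -R -u 1a94bb53:1c629336:3129889f:fb3064cb /dev/hda1 /dev/hdc1
EOF
# Append the hot-add right after the assemble line (GNU sed `a`):
sed -i '/^mdadm -A \/dev\/md0/a mdadm /dev/md0 --add /dev/hdc1' "$f"
cat "$f"
```

    On the real system you would run the sed line against /mnt/initrd17tip-rw/script instead of the temp file.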

    I recreated the initrd image by doing:

    mkcramfs initrd17tip-rw /boot/initrd.img-
    Then I reran lilo and rebooted.
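    Pulling the whole repair together, here is a hedged sketch of the sequence. The mount points and the initrd filename are placeholders rather than the exact ones from this box; by default it only prints what it would do, and only runs the commands for real with DO_IT=1:

```shell
# Dry-run wrapper: echo each command unless DO_IT=1 is set.
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

run mount -o loop,ro /boot/initrd.img /mnt/initrd17tip   # cramfs is read-only
run cp -a /mnt/initrd17tip /mnt/initrd17tip-rw           # make a writable copy
run umount /mnt/initrd17tip
# ... edit /mnt/initrd17tip-rw/script as described above ...
run mkcramfs /mnt/initrd17tip-rw /boot/initrd.img        # rebuild the image
run lilo                                                 # reinstall the boot loader
run reboot
```

    The dry-run guard is there because every one of these commands touches the boot path; check the printed plan before setting DO_IT=1.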

    After the reboot, the array was rebuilding.

    mdadm --query --detail /dev/md0

    showed something like:
        Number   Major   Minor   RaidDevice State
           0       3        1        0      active sync   /dev/hda1
           1       0        0        0      removed
           2       ?        22      1      spare syncing /dev/hdc1
    Once the resync completed, mdadm showed just the two devices, both in sync.
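    The rebuild progress can also be watched in /proc/mdstat (for instance with "watch -n 1 cat /proc/mdstat"). The sample below is illustrative, not captured from this box; the [2/1] [U_] part means two slots, one active, with the second mirror still syncing:

```shell
# Illustrative shape of /proc/mdstat during a raid1 rebuild.
mdstat='md0 : active raid1 hdc1[2] hda1[0]
      199141632 blocks [2/1] [U_]
      [==>..................]  recovery = 12.5% (24892704/199141632) finish=95.3min speed=30444K/sec'
printf '%s\n' "$mdstat"
```

    Once the recovery line disappears and the flags read [2/2] [UU], the mirror is whole again.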

    I really hope this will help someone.

  2. #2
    forgottentq (Just Joined!)
    Join Date: Jun 2006
    Virginia at the moment.
    Something extremely similar happened to me yesterday... this really helped! Thanks.
