  1. #1
    Just Joined! | Join Date: Apr 2011 | Posts: 3

    Unable to replace drive in software RAID-5 array (Fedora 14)


    I'm new to managing a RAID-5 array here at work. This is now the second drive that has failed on us out of a batch of 4 in less than a year and we're quite unhappy about it (looking at you, WD). I can't seem to get the new drive to be accepted into the RAID.

    Here's my setup:

    OS: Fedora 14
    Drives: 1 system drive; 3 existing 2TB drives in RAID-5 (of the original 4); all SATA


    So, I removed the old failing drive (it has been sent off to WD) and inserted my new drive.
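
    (For reference, a quick sanity check that the kernel actually sees the replacement disk, before involving the RAID at all, would be something like the following; nothing here touches the array:)

    # list the disks and partitions the kernel currently knows about;
    # the new 2TB disk should appear here
    cat /proc/partitions
    # check the kernel log for the new disk being detected on the SATA bus
    dmesg | grep -i ata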

    Upon boot, I loaded up palimpsest and clicked on the RAID listed on the left:

    State: Not running, partially assembled
    Components: 4
    In the red "Volume" bar graphic it says "RAID Array is not running"; however, the "Stop RAID Array" button is enabled. It also says "/dev/md127" in the title bar.
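
    (The quick command-line cross-check of that status view, if it helps, is:)

    # summary of every md array the kernel knows about, which members each one has,
    # and whether it is active or inactive
    cat /proc/mdstat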

    When I click Edit Components and then Add Spare, it lets me add the new drive.
    Back on the main palimpsest screen, though, it still shows the RAID as not running. When I click Start Array, it gives the following error:

    Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
    mdadm: Not enough devices to start the array.

    This error occurs whether I add the new drive or not.
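
    (For what it's worth, the command-line equivalents of those two palimpsest buttons should be roughly the following; /dev/sdd1 is only a guess at the new drive's partition name:)

    # "Add Spare": add the new partition to the array as a spare
    /sbin/mdadm --manage /dev/md127 --add /dev/sdd1
    # "Start RAID Array": try to run the stopped array
    /sbin/mdadm --run /dev/md127
    # the same "Not enough devices" refusal is expected either way, since a 4-disk
    # RAID-5 needs at least 3 members with current superblocks before it will start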


    [aside] Is it normal that the RAID gets assembled as /dev/md127 on boot, but tries to come back as /dev/md0 after stopping and restarting it?
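
    (On the md127 aside: that usually happens when the array has no matching ARRAY line in /etc/mdadm.conf, or the name/homehost stored in the superblock doesn't match this machine, so mdadm assembles it under a high "anonymous" number. An entry along the following lines, using the UUID from the --detail output below, would normally pin it to /dev/md0; this is only a sketch, not something to change while the array is in this state:)

    # /etc/mdadm.conf -- illustrative example only
    ARRAY /dev/md0 UUID=68fba474:da41eab6:7071f9fc:baa56434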


    Also, when I do a /sbin/mdadm --detail /dev/md127, I get the following:


    # /sbin/mdadm --detail /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Mon May 9 11:46:07 2011
    Raid Level : raid5
    Used Dev Size : 1953510400 (1863.01 GiB 2000.39 GB)
    Raid Devices : 4
    Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov 15 15:55:43 2011
    State : active, FAILED, Not Started
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Layout : left-symmetric
    Chunk Size : 512K

    Name : :RAID
    UUID : 68fba474:da41eab6:7071f9fc:baa56434
    Events : 56793

    Number   Major   Minor   RaidDevice   State
       0       8      17         0        active sync   /dev/sdb1
       1       8       1         1        active sync   /dev/sda1
       2       0       0         2        removed
       3       0       0         3        removed



    It's as if it's saying only 2 drives are connected, when I definitely have the original 3 in there (plus the new one).
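
    (A useful next step that doesn't write anything to the array is to read the RAID superblock on each member partition directly; /dev/sdc1 and /dev/sdd1 below are guesses at the other member names and may differ on this system:)

    # print the md superblock stored on each partition that should belong to the array;
    # comparing the "Events" counters and "Array State" lines shows which members
    # the array still considers up to date
    /sbin/mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1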



    How do I get this sorted out? I'm not familiar with mdadm or any other command-line administration of RAID. These drives contain important medical research data backups and I absolutely must get them back. (Some of it, I've now realized, has no other backup.)

    Thanks in advance,

    Matt

  2. #2
    Linux Guru Rubberman | Join Date: Apr 2009 | Location: I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away. | Posts: 11,755
    Assuming it is a hot-swap enclosure: remove the new drive, start the system, verify that the RAID volume is running in "degraded" mode, then install the new drive. You may need to start the System Tools -> Disk Utility application and tell it to rebuild the array if the rebuild doesn't start automatically. Whatever you do, don't stop/restart the array.
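
    (Roughly, the command-line version of that sequence, with /dev/sdd1 standing in for whatever partition the new disk ends up as, would look something like this:)

    # with the new drive out, confirm the array is running degraded (3 of 4 members up)
    cat /proc/mdstat
    /sbin/mdadm --detail /dev/md127
    # after hot-plugging the new drive and partitioning it to match the other members,
    # add it to the array; a running degraded array should start rebuilding onto it
    /sbin/mdadm --manage /dev/md127 --add /dev/sdd1
    # watch the rebuild progress
    watch cat /proc/mdstat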
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!
