  1. #1

    mdadm raid 5 won't reactivate after fault

    I have a RAID 5 array with 8 drives. One drive died and the machine was restarted. The dead drive is not recognized in the BIOS or anywhere else, so it was replaced. The 7 other drives all have their superblocks in place, but when I try to reassemble, mdadm says there are not enough disks to start the array and creates the md device in an inactive state with those drives.
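    Before deciding anything, it's worth comparing the superblocks of the 7 survivors. A minimal sketch, assuming the member partitions are /dev/sd[b-h]1 and the inactive array came up as /dev/md0 (both are assumptions, adjust to your devices):

    ```shell
    # Inspect each surviving superblock. Event counts and update times
    # that match (or nearly match) mean those members are in sync.
    mdadm --examine /dev/sd[b-h]1 | grep -E '^/dev|Events|Update Time'

    # Stop the half-assembled inactive array before retrying anything.
    mdadm --stop /dev/md0
    ```

    If one drive's event count lags well behind the others, that drive fell out of the array earlier and assembling without it (degraded) is usually the safer interpretation.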

    I do not have the conf file because, somehow, the person who set this up put the OS on the RAID itself, so I'm doing this from a live disk.

    I see suggestions to recreate the array with one drive marked missing and then add the spare drive I have so it will rebuild with no data loss. I'm not sure whether that's a better choice than assembling with force. It's about 7TB, so I'd prefer not to go the wrong route.
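    For what it's worth, the forced assemble is normally tried first; recreating with --create is a last resort, since getting the device order, chunk size, or metadata version wrong will scramble the data. A sketch of the forced-assemble path, again assuming /dev/md0 and /dev/sd[b-h]1 as hypothetical names:

    ```shell
    # --force lets mdadm start a degraded array whose members' event
    # counts drifted slightly apart during the crash/reboot.
    mdadm --assemble --force /dev/md0 /dev/sd[b-h]1

    # If it comes up degraded, verify the state before changing anything.
    cat /proc/mdstat
    mdadm --detail /dev/md0
    ```

    Only once the array is running degraded and the data looks sane would you add the replacement drive to kick off the rebuild.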

    Thanks in advance,

  2. #2
    Linux Guru Lazydog's Avatar
    Join Date
    Jun 2004
    The Keystone State
    Have you run any mdadm commands on this array yet? Like the one to remove the dead drive from the array, and then, after replacing the drive, the one to add the new one?
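    A sketch of those two steps, assuming the array is /dev/md0 and the replacement takes the device name /dev/sdb1 (both hypothetical):

    ```shell
    # Remove the dead member. Since the failed disk is no longer visible
    # to the system, the "detached" keyword tells mdadm to drop any
    # member whose device node has gone away.
    mdadm /dev/md0 --remove detached

    # After partitioning the new disk, add it; the array then starts
    # rebuilding onto it automatically (watch /proc/mdstat).
    mdadm /dev/md0 --add /dev/sdb1
    ```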

    Have a look HERE


