  1. #1

    (F) on one RAID 1 device: degraded RAID


    Hello,

    I found out that the RAID 1 was degraded:

    [root@node /]# cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sda5[0] sdb5[1]
    1822445428 blocks super 1.0 [2/2] [UU]

    md2 : active raid1 sda3[0](F) sdb3[1]
    1073741688 blocks super 1.0 [2/1] [_U]

    md1 : active raid1 sda2[0] sdb2[1]
    524276 blocks super 1.0 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
    33553336 blocks super 1.0 [2/2] [UU]

    So it seems to me that in array md2 the device sda3 was marked faulty (F).
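    (For a fuller view than /proc/mdstat, mdadm --detail shows the overall
    array state and each member's role:

    [root@node /]# mdadm --detail /dev/md2

    its State line, e.g. "clean, degraded", and the device table at the bottom
    confirm which member is marked faulty.)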

    So I tried to remove and re-add it in order to repair it:

    [root@node /]# mdadm --remove /dev/md2 /dev/sda3
    mdadm: hot removed /dev/sda3 from /dev/md2

    then I tried to add it again:

    [root@node /]# mdadm --add /dev/md2 /dev/sda3
    mdadm: /dev/sda3 reports being an active member for /dev/md2, but a --re-add fails.
    mdadm: not performing --add as that would convert /dev/sda3 in to a spare.
    mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sda3" first.
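    (The message refers to --re-add; with the old superblock still intact, the
    explicit form would be:

    [root@node /]# mdadm /dev/md2 --re-add /dev/sda3

    but as the error above says, mdadm already tried the equivalent internally
    and it failed, which is why it suggests zeroing the superblock instead.)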

    and the result is:

    [root@node /]# cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sda5[0] sdb5[1]
    1822445428 blocks super 1.0 [2/2] [UU]

    md2 : active raid1 sdb3[1]
    1073741688 blocks super 1.0 [2/1] [_U]

    md1 : active raid1 sda2[0] sdb2[1]
    524276 blocks super 1.0 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
    33553336 blocks super 1.0 [2/2] [UU]

    unused devices: <none>


    So

    md2 : active raid1 sda3[0](F) sdb3[1]
    1073741688 blocks super 1.0 [2/1] [_U]

    changed to

    md2 : active raid1 sdb3[1]
    1073741688 blocks super 1.0 [2/1] [_U]

    and I can't re-add the second device.
    If I run mdadm --zero-superblock /dev/sda3,
    would that erase my data from the partition? Or would it cause any other damage?

    I've searched the manual, which says:

    --zero-superblock
    If the device contains a valid md superblock, the block is overwritten with zeros.
    With --force the block where the superblock would be is overwritten even if it doesn't appear to be valid.

    but I'm not sure whether this erases data, and since the partition is mounted at /, where all the data lives, I'm scared to try it without verification.
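    From what I can tell, the arrays above use "super 1.0", and 1.0 metadata
    sits in a small block at the end of the device, outside the filesystem
    area, so if I understand it correctly --zero-superblock would only wipe
    that metadata block and not the files. It also seems possible to inspect
    the superblock first without changing anything:

    [root@node /]# mdadm --examine /dev/sda3

    which prints the stored metadata version, array UUID and event count.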

    If anyone knows, that would be really helpful.
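    In the meantime I'm also checking whether the drive itself is failing,
    since it already got kicked out of the array once; assuming smartmontools
    is installed:

    [root@node /]# smartctl -H /dev/sda

    a FAILED health status there would mean replacing the disk rather than
    re-adding it.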

  2. #2
    After a lot of research I decided to go ahead with it:

    [root@node ~]# mdadm --zero-superblock /dev/sda3
    [root@node ~]# mdadm /dev/md2 -a /dev/sda3
    mdadm: added /dev/sda3

    [root@node ~]# cat /proc/mdstat
    Personalities : [raid1]
    md3 : active raid1 sda5[0] sdb5[1]
    1822445428 blocks super 1.0 [2/2] [UU]

    md2 : active raid1 sda3[2] sdb3[1]
    1073741688 blocks super 1.0 [2/1] [_U]
    [>....................]  recovery =  1.1% (12009408/1073741688) finish=313.7min speed=56397K/sec

    md1 : active raid1 sda2[0] sdb2[1]
    524276 blocks super 1.0 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
    33553336 blocks super 1.0 [2/2] [UU]

    As you can see, the array is recovering, so everything is OK and --zero-superblock didn't damage any data.
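    (To watch the rebuild, polling /proc/mdstat works fine; once it finishes,
    md2 should show [2/2] [UU] again:

    [root@node ~]# watch -n 60 cat /proc/mdstat
    [root@node ~]# mdadm --detail /dev/md2

    the second command should then report the State as clean.)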
