  1. #1
    Just Joined!
    Join Date
    Feb 2013
    Posts
    5

    Raid 5 suddenly doesn't contain a valid partition table


    I am having a problem getting my mdadm raid 5 to mount. Something peculiar happened last night: MythTV hung while playing a movie stored on the raid. I decided to reboot the server, and on restarting, the raid failed to mount.

    It has 5x 2TB disks and is about 80% full. Looking in gparted, it says the disk is unallocated; looking in Disk Utility, it shows an 8TB volume with unknown volume and partition type.

    Each individual disk's SMART status is fine, so I ran a check and repair of the array through Disk Utility last night; it completed without issues, but the raid still won't mount.
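    (For reference, I believe the command-line equivalent of that Disk Utility check is roughly the following; I used the GUI, so this is only a sketch of the md sysfs interface:)
    Code:
    # kick off a consistency check of the array (reads data and compares it with parity)
    echo check > /sys/block/md127/md/sync_action
    # watch progress
    cat /proc/mdstat
    # number of mismatches the check found
    cat /sys/block/md127/md/mismatch_cnt
    # a subsequent repair pass would be:  echo repair > /sys/block/md127/md/sync_action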

    I'm bamboozled as to why it would suddenly fail like this, with potentially terminal data loss.

    # fdisk -l
    Code:
    Disk /dev/sda: 120.0 GB, 120034123776 bytes
    255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000536f8
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1       218056704   234440703     8192000    5  Extended
    /dev/sda2   *        2048   218056703   109027328   83  Linux
    /dev/sda5       218058752   234440703     8190976   82  Linux swap / Solaris
    
    Partition table entries are not in disk order
    
    Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
    81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x935e2041
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048  3907029167  1953513560   fd  Linux RAID autodetect
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    
    Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1  3907029167  1953514583+  ee  GPT
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    
    Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1               1  3907029167  1953514583+  ee  GPT
    
    WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.
    
    
    Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1               1  3907029167  1953514583+  ee  GPT
    
    Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
    81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000ce637
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1            2048  3907029167  1953513560   fd  Linux RAID autodetect
    
    Disk /dev/md127: 8001.6 GB, 8001589084160 bytes
    2 heads, 4 sectors/track, 1953512960 cylinders, total 15628103680 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 2097152 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md127 doesn't contain a valid partition table
    $ cat /proc/mdstat
    Code:
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid5 sde1[0] sdf1[5] sdb1[6] sdc1[2] sdd1[1]
          7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>
    $ mdadm --detail /dev/md127
    Code:
    mdadm: excess address on MAIL line: ARRAY - ignored
    mdadm: excess address on MAIL line: /dev/md127 - ignored
    mdadm: excess address on MAIL line: UUID=f9099a38:9bd89ac8:a955705d:3a3244ad - ignored
    /dev/md127:
            Version : 1.2
      Creation Time : Sat Aug 27 15:57:01 2011
         Raid Level : raid5
         Array Size : 7814051840 (7452.06 GiB 8001.59 GB)
      Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
       Raid Devices : 5
      Total Devices : 5
        Persistence : Superblock is persistent
    
      Intent Bitmap : Internal
    
        Update Time : Thu Feb 21 16:33:48 2013
              State : active 
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               Name : :raid array
               UUID : f9099a38:9bd89ac8:a955705d:3a3244ad
             Events : 109350
    
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       49        1      active sync   /dev/sdd1
           2       8       33        2      active sync   /dev/sdc1
           6       8       17        3      active sync   /dev/sdb1
           5       8       81        4      active sync   /dev/sdf1
    $ mount -t ext4 /dev/md127 /media/raid
    Code:
    mount: wrong fs type, bad option, bad superblock on /dev/md127,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail  or so
    $ dmesg | tail
    Code:
    [11227.738489]  disk 3, o:1, dev:sdb1
    [11227.738491]  disk 4, o:1, dev:sdf1
    [11227.738671] created bitmap (15 pages) for device md127
    [11227.739507] md127: bitmap initialized from disk: read 1/1 pages, set 0 of 29809 bits
    [11227.762702] md127: detected capacity change from 0 to 8001589084160
    [11227.773253]  md127: unknown partition table
    [11249.124497] EXT4-fs (md127): VFS: Can't find ext4 filesystem
    [11253.574709] EXT4-fs (md127): VFS: Can't find ext4 filesystem on dev md127.
    I did notice that ls /dev/md* lists md0 as well as md127, so just to make sure, I edited mdadm.conf - the commented-out line is the old one (a sketch of how that ARRAY line can be regenerated follows the listings below).

    $ cat /etc/mdadm/mdadm.conf
    Code:
    # ARRAY /dev/md/:raid metadata=1.2 name=:raid UUID=f9099a38:9bd89ac8:a955705d:3a3244ad
     ARRAY /dev/md127 UUID=f9099a38:9bd89ac8:a955705d:3a3244ad
    $ cat /etc/fstab
    Code:
    /dev/md127      /media/raid     ext4    defaults        0       2
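    (For what it's worth, my understanding is that the ARRAY line can be regenerated from the running array roughly like this; I haven't re-run it since the failure, so treat it as a sketch that assumes the Debian/Ubuntu mdadm layout:)
    Code:
    # print an ARRAY line for every currently assembled array; merge it into /etc/mdadm/mdadm.conf by hand
    mdadm --detail --scan
    # Debian/Ubuntu: rebuild the initramfs so the updated conf is picked up at boot
    update-initramfs -u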
    Does anyone have an idea what is going on here, and whether it is possible to resurrect the raid and save the data?
    Last edited by laughing_man77; 02-22-2013 at 02:36 AM. Reason: remove username from bash code

  2. #2
    Just Joined!
    Join Date
    Feb 2013
    Posts
    5
    Could this perhaps be because 3 of the disks in the array are GPT, and not MBR?
    Code:
    WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.
    I also reverted the change that I made to mdadm.conf after the failed mount:
    Code:
    $ vim /etc/mdadm/mdadm.conf
    ARRAY /dev/md/:raid metadata=1.2 name=:raid UUID=f9099a38:9bd89ac8:a955705d:3a3244ad
    Code:
    $ mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    Code:
    $ mdadm --assemble --force /dev/md127 /dev/sd[bcdef]1
    mdadm: /dev/md127 has been started with 5 drives.
    Code:
    $ mdadm --detail /dev/md127
    /dev/md127:
            Version : 1.2
      Creation Time : Sat Aug 27 15:57:01 2011
         Raid Level : raid5
         Array Size : 7814051840 (7452.06 GiB 8001.59 GB)
      Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
       Raid Devices : 5
      Total Devices : 5
        Persistence : Superblock is persistent
    
      Intent Bitmap : Internal
    
        Update Time : Thu Feb 21 17:03:43 2013
              State : active 
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : left-symmetric
         Chunk Size : 512K
    
               Name : :raid array
               UUID : f9099a38:9bd89ac8:a955705d:3a3244ad
             Events : 109350
    
        Number   Major   Minor   RaidDevice State
           0       8       65        0      active sync   /dev/sde1
           1       8       49        1      active sync   /dev/sdd1
           2       8       33        2      active sync   /dev/sdc1
           6       8       17        3      active sync   /dev/sdb1
           5       8       81        4      active sync   /dev/sdf1
    So I got excited here: it recognises the raid level, the disks are active, the raid is active, the superblock is persistent...
    Code:
    $ mount /media/raid 
    mount: wrong fs type, bad option, bad superblock on /dev/md127,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail  or so
    ...and I'm back to square one.

    I'm guessing that this is probably something to do with the EXT4 partition in the raid.
    $ cat /proc/partitions
    Code:
    major minor  #blocks  name
    
       8        0  117220824 sda
       8        1          1 sda1
       8        2  109027328 sda2
       8        5    8190976 sda5
       8       16 1953514584 sdb
       8       17 1953513560 sdb1
       8       32 1953514584 sdc
       8       33 1953514550 sdc1
       8       48 1953514584 sdd
       8       49 1953514550 sdd1
      11        0    1048575 sr0
       8       64 1953514584 sde
       8       65 1953514550 sde1
       8       80 1953514584 sdf
       8       81 1953513560 sdf1
       9      127 7814051840 md127
    and as a recap:

    $ fdisk -l /dev/md127
    Code:
    Disk /dev/md127: 8001.6 GB, 8001589084160 bytes
    2 heads, 4 sectors/track, 1953512960 cylinders, total 15628103680 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 2097152 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md127 doesn't contain a valid partition table
    The only thing that seems to be missing is the partition. I had wondered if I had caused its loss by attempting the rebuild, but it was already reported as missing at the first reboot, although I may have compounded the problem with the attempted rebuild...
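    Before I do anything destructive, these are the read-only checks I plan to try next, to see whether the ext4 superblock is still there at all (just a sketch; nothing here should write to the array):
    Code:
    # does anything still identify a filesystem signature on the md device?
    blkid /dev/md127
    file -s /dev/md127
    # try to read the primary ext4 superblock
    dumpe2fs -h /dev/md127
    # read-only filesystem check (-n answers "no" to every question)
    fsck.ext4 -n /dev/md127
    # dry run only: lists where backup superblocks *would* live for a device this
    # size, assuming the default mkfs.ext4 parameters were used originally
    mke2fs -n /dev/md127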

  3. #3
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,380
    This sort of thing has only happened to me when the fan failed on my drive enclosure while I was away one weekend. I got back, and overheating had pretty much fubar'd the drives in the array. If both the source and parity drives containing the information for the partition table have failed, then this situation can occur. If you have the configuration of the partition table saved somewhere (hard copy is good - even hand-written), then you might be able to restore it and get your data back. If the data is very valuable to you, there are recovery services that can pull your data off the array (even after failure), but at quite a price. I was quoted over $1000 USD to recover a 2TB array (4x500GB); in your case, it will likely be somewhat more...
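    One other thing: once you have an array you care about (or the rebuilt one), keep a dump of every member's partition table somewhere off the box. Something along these lines would do it (just a sketch; the file names are only examples, and sgdisk comes from the gdisk package):
    Code:
    # MBR-partitioned members
    sfdisk -d /dev/sdb > sdb-partition-table.txt
    # GPT-partitioned members
    sgdisk --backup=sdc-gpt-backup.bin /dev/sdc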
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  4. #4
    Linux Engineer
    Join Date
    Apr 2012
    Location
    Virginia, USA
    Posts
    880
    LaughingMan, I think in your case, since you (or someone) shelled out the clams for five 2TB drives, there's no good reason you shouldn't have bought a hardware RAID controller.

    RAID 5 is very finicky as it is, and I wouldn't touch a software RAID 5 with a 10-foot pole.

  5. #5
    Linux Guru Lazydog's Avatar
    Join Date
    Jun 2004
    Location
    The Keystone State
    Posts
    2,677
    If I am not mistaken, GPT is used by Windows. Something or someone has changed your disk partitions, either on purpose or by accident. You are going to have to fix those drives first.

    Regards
    Robert

    Linux
    The adventure of a life time.

    Linux User #296285
    Get Counted

  6. #6
    Just Joined!
    Join Date
    Feb 2013
    Posts
    5
    Thanks for replying guys, I really appreciate it.

    @rubberman, luckily the data is not mission critical. There is some data in there that I would prefer to preserve, but it's not worth the $1,000s it would cost to use a professional service. Thanks for the suggestion, though; it's good to know how expensive that route would have been.

    @mizzle, the raid started out quite a bit smaller, but the server was purposely chosen with a large, ugly case that could take new drives when I wanted to add them. I must admit the next planned upgrade was towards NAS storage, physically separating data storage from the server OS. However, that doesn't answer the immediate question of whether I can save the data and/or the raid in its current state, rather than just jumping straight in, formatting, and starting again.

    @lazydog, according to wikipedia (en.wikipedia.org/wiki/GUID_Partition_Table#Operating_System_support_of_GPT) and ubuntuforums.org/showthread.php?t=1952953 :
    As of 2010, most current operating systems support GPT, although some (including OS X and Microsoft Windows) only support booting to GPT partitions on systems with EFI firmware.
    and... GUID Partition Table (GPT) is a standard created by Intel to replace the legacy MBR, which was limited to disks of around 2.2TB.

    I don't think it is used solely by Windows and OS X; it is used on Linux as well. fdisk struggles to work with it at present, but gparted handles it.

    I was trying to figure out where the GUID vs MBR partitions came from, and I think I've worked out that I used gparted to format the 2 new disks. Here's another snippet from the above wikipedia link:
    Some distribution tools, such as fdisk, don't work with GPT. New tools such as gdisk,[10] GNU Parted,[11][12] Syslinux, grub 0.96+patches and grub2 have been GPT-enabled.
    I'm not sure if having a mix of MBR and GPT (GUID) partitioned disks is the source of the trouble, but those disks have been in the raid for over a year.

    When I was poking around in gparted just now, I noticed that it has an option to "attempt to rescue the disk" for each of the disks, including the raid device (md127). Perhaps this would be worth a last-ditch attempt, though I assume that if it fails to rescue the disk, all the data will be lost (if it isn't already).
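    Before I resort to that, I might first try a scan with testdisk (assuming it is packaged for my distro), since as far as I know its analysis step is read-only until you explicitly choose to write something. Just a sketch:
    Code:
    # interactive, run as root; passing the device preselects it, then choose Analyse.
    # Nothing is written until you confirm it.
    testdisk /log /dev/md127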

  7. #7
    Linux Guru Lazydog's Avatar
    Join Date
    Jun 2004
    Location
    The Keystone State
    Posts
    2,677
    Quote Originally Posted by laughing_man77 View Post
    @lazydog, according to wikipedia (en.wikipedia.org/wiki/GUID_Partition_Table#Operating_System_support_of_GPT) and ubuntuforums.org/showthread.php?t=1952953 :

    and... GUID Partition Table (GPT) is a standard created by Intel to replace the legacy MBR, which was limited to disks of around 2.2TB.

    I don't think it is used solely by Windows and OS X; it is used on Linux as well. fdisk struggles to work with it at present, but gparted handles it.

    I was trying to figure out where the GUID vs MBR partitions came from, and I think I've worked out that I used gparted to format the 2 new disks. Here's another snippet from the above wikipedia link:

    I'm not sure if having a mix of MBR and GPT (GUID) partitioned disks is the source of the trouble, but those disks have been in the raid for over a year.

    When I was poking around in gparted just now, I noticed that it has an option to "attempt to rescue the disk" for each of the disks, including the raid device (md127). Perhaps this would be worth a last-ditch attempt, though I assume that if it fails to rescue the disk, all the data will be lost (if it isn't already).
    I think you missed the error that was given to you:

    Code:
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
    This warning tells you that your fdisk doesn't support GPT.
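    For the GPT members you need a GPT-aware tool to look at the partition table, something like this (just a sketch; either should work if installed):
    Code:
    parted /dev/sdc print
    # or, from the gdisk package:
    gdisk -l /dev/sdc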

    Regards
    Robert

    Linux
    The adventure of a life time.

    Linux User #296285
    Get Counted

  8. #8
    Just Joined!
    Join Date
    Feb 2013
    Posts
    5
    Hi Lazydog,

    Sorry. I did see it, but assumed it was not important. My understanding is that parted and fdisk are essentially the same, just different programs that do similar things. Am I incorrect here?

    If this is an issue, I'm guessing there's not much I can do to save the data here, because 2 disks are in this format. Unless you know how I can change the partition format without wiping out the original partitions or the raid stripes?

    Cheers, laughing_man77

  9. #9
    Linux Guru Rubberman's Avatar
    Join Date
    Apr 2009
    Location
    I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
    Posts
    11,380
    Well, arrays can be a problem - keep them backed up on a regular basis. I should know: my LVM /home just went south when one of the drives died! Needless to say, I have lost a LOT of data, much of which is unrecoverable... My bad, but a good learning exercise. So don't trust ANYTHING; back up early and often...
    Sometimes, real fast is almost as good as real time.
    Just remember, Semper Gumbi - always be flexible!

  10. #10
    Just Joined!
    Join Date
    Feb 2013
    Posts
    5
    Yep, thanks for your advice, guys. As I said before, it's much appreciated.

    I think I'm going to bite the bullet. I've sat on the server without any data for nearly a week now, dreading the moment when I would have to format the drives and vaguely hoping that there was a way I could resurrect the array.

    It's formatting time...

    I already have backup scripts for the database, wiki and public HTML folder. When this is done, it will be worth spending a morning creating a script to upload backups of selected folders to my remote server.
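    A rough sketch of the upload script I have in mind (the host, user and paths below are made up; it assumes rsync over ssh):
    Code:
    #!/bin/sh
    # mirror selected backup folders to the remote server; --delete removes stale copies
    rsync -az --delete /srv/backups/mysql/   backupuser@backup.example.com:backups/mysql/
    rsync -az --delete /srv/backups/wiki/    backupuser@backup.example.com:backups/wiki/
    rsync -az --delete /var/www/public_html/ backupuser@backup.example.com:backups/public_html/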

    Cheers all!
