Thread: mdadm superblock mount issue
I am having an issue getting my mdadm RAID 5 array to mount. It seems to have gotten a bad superblock or lost the filesystem somehow. This happened after a reboot; the array was working before, and I have about 1.8 TB of data on it.
Code:
# fdisk -l

Disk /dev/sda: 2000 GB, 2000396321280 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      243201  1953512001   fd  Lnx RAID auto

Disk /dev/sdb: 2000 GB, 2000396321280 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243191  1953431676   fd  Lnx RAID auto
Warning: Partition 1 does not end on cylinder boundary.

Disk /dev/sdc: 2000 GB, 2000396321280 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243191  1953431676   fd  Lnx RAID auto
Warning: Partition 1 does not end on cylinder boundary.

Disk /dev/sdd: 8 GB, 8217054720 bytes
255 heads, 63 sectors/track, 999 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1        1000     8032468    b  FAT32
Warning: Partition 1 does not end on cylinder boundary.

Error: /dev/md127: unrecognised disk label
Code:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdc1 sdb1 sda1
      3906858624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
Code:
# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sun Sep  9 01:19:43 2012
     Raid Level : raid5
     Array Size : 3906858624 (3725.87 GiB 4000.62 GB)
  Used Dev Size : 1953429312 (1862.94 GiB 2000.31 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sun Sep  9 20:39:11 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : ubuntu:1  (local to host ubuntu)
           UUID : c1d3dc22:591547b6:9f40b25e:7cd7a3d9
         Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8        1        1      active sync   /dev/sda1
       3       8       33        2      active sync   /dev/sdc1
Code:
# mount -t ext3 /dev/md127 4tb
mount: wrong fs type, bad option, bad superblock on /dev/md127,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
Code:
# dmesg | tail
[   20.413469] ADDRCONF(NETDEV_UP): eth0: link is not ready
[   21.862008] r8169 0000:02:00.0: eth0: link up
[   21.868215] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   32.576026] eth0: no IPv6 routers present
[ 8031.632455] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
[ 8031.652402] EXT4-fs (md127): VFS: Can't find ext4 filesystem
[ 8031.672284] FAT-fs (md127): bogus number of FAT structure
[ 8031.672290] FAT-fs (md127): Can't find a valid FAT filesystem
[ 8031.696268] SQUASHFS error: Can't find a SQUASHFS superblock on md127
[ 8040.053649] EXT3-fs (md127): error: can't find ext3 filesystem on dev md127.
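In case it helps with diagnosis, a generic way to ask what signature, if any, sits at the start of the device would be something like this (a sketch; I have not pasted its output here):

Code:
# file -s /dev/md127   # -s reads the block device itself and matches known filesystem signatures
# blkid /dev/md127     # prints a TYPE= line if any known superblock is recognised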
Thanks for any help that you may have.
Are you sure there is no LVM container wrapped around the partitions stored across the hard drives? The RAID itself seems to be functioning properly.
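Something like this would rule it out (a sketch, assuming the LVM2 userspace tools are installed):

Code:
# pvscan   # scan all block devices for LVM physical volume labels
# vgscan   # look for volume groups
# lvs      # list logical volumes, if any exist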
Thanks for the reply. I am sure there is no LVM. I ran these just to be sure:
Code:
# pvscan
  No matching physical volumes found
# pvdisplay
# lvm pvs
If you know the filesystem should be ext3 (or at least ext2) and the first superblock is broken, you can try falling back to one of the backup superblocks that ext2/ext3 creates automatically. How to do this is described here:
Advanced Find ext2 ext3 Backup SuperBlock - CGSecurity
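Roughly, the recovery looks like this (a sketch; the backup block number is only an example and depends on the filesystem's block size, which is why the dry-run mke2fs step comes first):

Code:
# mke2fs -n /dev/md127        # -n is a dry run: prints where superblock backups would live, writes nothing
# e2fsck -b 32768 /dev/md127  # -b tells fsck to use the given backup superblock (32768 is common for 4k blocks)

Note that mke2fs -n only reports the right locations if it picks the same parameters the filesystem was created with, so treat its output as a starting point.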
Beware of running fsck on a non-ext partition! It can erase your data! The safest thing to do is to dry-run the commands on a backup. If that goes well, you can then run the same commands on the original data and restore the filesystem.
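By a dry-run on a backup I mean something like the following (a sketch; /mnt/space/md127.img is a placeholder path, and a full image of this array needs about 4 TB of free space):

Code:
# dd if=/dev/md127 of=/mnt/space/md127.img bs=1M conv=noerror,sync  # image the whole array, padding over read errors
# losetup /dev/loop0 /mnt/space/md127.img                           # expose the image as a block device
# e2fsck -n /dev/loop0                                              # -n answers "no" to every repair: check only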
Anyway, it seems odd to me that a superblock is broken on a RAID 5. This should NEVER happen unless there is a hardware malfunction across more than one physical hard drive! Did the RAID fail some time ago, claiming that a disk was broken?
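You can look for traces of an earlier failure in the member superblocks (mismatched event counts or update times) and in the drives' SMART health (a sketch):

Code:
# mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 | grep -E 'Update Time|Events|State'
# smartctl -H /dev/sda   # repeat for /dev/sdb and /dev/sdc; quick per-drive health verdict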
I tried running the testdisk program on /dev/md127. It found a filesystem but could not do anything with it: it reported that it was unable to repair it or even list the backup superblocks.
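In case it is useful: as I understand it, an ext2/3/4 superblock starts at byte 1024 of the volume, with its magic at byte offset 1080, so one can check for it by hand. An intact superblock should print "53 ef" (the little-endian bytes of 0xEF53). A sketch, not output from my array:

Code:
# dd if=/dev/md127 bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1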
As far as I know the RAID never failed. It was working fine and then stopped working after a reboot.