- 06-15-2008 #1
[SOLVED] xfs over raid5 mount fail
I have a RAID 5 array /dev/md0 that is no longer mounted after reboots. The file system created on that mount point is XFS.
I have done the following to debug this, but I do not understand the root cause. Can anyone help me understand what the issue is and how to fix it? (I am not an expert in this stuff.)
This has happened out of the blue after a couple of years of trouble-free running.
fsck 1.40.2 (12-Jul-2007)
If you wish to check the consistency of an XFS filesystem or
repair a damaged filesystem, see xfs_check(8) and xfs_repair(8).
xfs_check: /dev/md0 is invalid (cannot read first 512 bytes)
mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
md0 : inactive sda5(S) sdd5(S) sdc5(S) sdb5(S)
unused devices: <none>
So it appears that the RAID 5 array is "inactive".
How can I find out why?
How can I activate it?
Nothing reported here indicates the problem (to me).
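For what it's worth, the "inactive" state can be picked out of /proc/mdstat mechanically. A minimal sketch, using sample text that mirrors the output above (the awk field positions are an assumption about the mdstat line format; on a live system you would read /proc/mdstat itself):

```shell
# Sample mdstat text mirroring the output shown above.
mdstat='md0 : inactive sda5(S) sdd5(S) sdc5(S) sdb5(S)
unused devices: <none>'

# Report any array whose state word is not "active".
echo "$mdstat" | awk '$2 == ":" && $3 != "active" {
    printf "%s is %s (members:", $1, $3
    for (i = 4; i <= NF; i++) printf " %s", $i
    print ")"
}'
# prints: md0 is inactive (members: sda5(S) sdd5(S) sdc5(S) sdb5(S))
```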
I'm pretty sure there is not an outright disk failure, because:
sda1 has /boot
sda2 has /
sdb2 has /var
sdc2 has /tmp
sdd2 has swap
I can see /tmp, /var and (obviously) /.
Though no swap is in use, "free" reports it correctly, so I assume it is OK.
Linux nas 126.96.36.199-31-default #1 SMP 2007/09/21 22:29:00 UTC i686 athlon i386 GNU/Linux
dmesg | more
(relevant excerpts - seems to indicate that RAID is the problem)
ieee1394: Host added: ID:BUS[0-00:1023] GUID[0011d80000916254]
loop: module loaded
SGI XFS with ACLs, security attributes, realtime, large block numbers, no debug enabled
SGI XFS Quota Management subsystem
XFS: SB read failed
md: could not bd_claim sda5.
md: could not bd_claim sdb5.
md: could not bd_claim sdc5.
md: could not bd_claim sdd5.
md: autorun ...
md: ... autorun DONE.
Any help/suggestions *much* appreciated!
- 06-18-2008 #2
Run 'mdadm -E /dev/X' for each raided partition.
This showed one partition marked as failed.
I think (since the other partitions on that disk are OK) there was data corruption that was causing the RAID as a whole to fail.
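The examine step can be scripted as a loop over the members from this thread, pulling out each superblock's State line. A sketch (the sample text stands in for real `mdadm -E` output and its values are illustrative; on a live system you would pipe `mdadm -E "$part"` into the awk instead):

```shell
# Illustrative stand-in for `mdadm -E /dev/sda5` output (hypothetical values).
examine_output='          Magic : a92b4efc
        Version : 00.90.00
          State : clean'

# On a real system:  for part in /dev/sd[abcd]5; do mdadm -E "$part" | ... ; done
state=$(echo "$examine_output" | awk -F': ' '/State/ {print $2}')
echo "State: $state"
# prints: State: clean
```

A member whose State line reads anything other than clean/active is the one to suspect.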
mdadm -A /dev/md0 -f -U summaries /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
which seemed to get it going again, apparently ok.
After backing up the RAID mount, I then rebuilt it by simply adding the dodgy partition back in...
mdadm /dev/md0 --add /dev/sdb5
Doing 'cat /proc/mdstat' showed progress and ETA nicely.
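That progress line can also be extracted in a script, e.g. for a cron job or a simple watch loop. A sketch, using a sample recovery line of the shape /proc/mdstat prints during a rebuild (the percentage and speeds below are illustrative, not from this thread):

```shell
# Illustrative /proc/mdstat text while sdb5 is being re-added (hypothetical values).
mdstat='md0 : active raid5 sdb5[4] sda5[0] sdc5[2] sdd5[3]
      [==>..................]  recovery = 12.5% (1234567/9876543) finish=42.0min speed=3000K/sec'

# On a real system:  progress=$(grep -o 'recovery = [0-9.]*%' /proc/mdstat)
progress=$(echo "$mdstat" | grep -o 'recovery = [0-9.]*%')
echo "$progress"
# prints: recovery = 12.5%
```

Something like `watch cat /proc/mdstat` gives the same information interactively.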