Trouble with RAID 5
I am new to Linux and software RAID, so if I've posted in the wrong place or my problem has already been answered, please point me in the right direction.
I am using Mandriva x86-64 Free (installed on a separate drive from the RAID) for local file serving, with software RAID 5 (formatted as NTFS) across 4x 2TB drives on a cheap PCI add-in card (I think some sort of Silicon Image). It was working great until the array disappeared yesterday (possibly a power outage). Just the folder is still there, and its properties say 1.7 GiB of 11.8 GiB. In Control Center, under local disks > manage partitions, the RAID tab has disappeared, but all the drives still show up. How do I rebuild it? I have around 3 TB of data on there that I need - or is it lost? Please help.
You formatted this with NTFS? On a *nix system?
This is a common result of NTFS corruption (it happens with other filesystems as well). Your *best* bet would be to boot into a Windows environment and run a Windows chkdsk - Linux has no full NTFS check utility.
That also requires that the RAID set will build/mount under Windows. So if you used the Linux software md driver, you are out of luck. If it's done through the SI card, it may show up in Windows.
The drives are formatted as Linux RAID, but my friend told me that since all my other PCs run Windows, formatting the RAID 5 as NTFS would be best. The card was only used to gain more SATA ports (it's not configured for RAID). Am I using the Linux md driver? If so, is there any other way?
Your friend is very wrong. Yes, it sounds like you're using the md driver. (Can confirm by looking at df -ah and seeing if /dev/mdX is mounted.)
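A quick way to check from a terminal (just a sketch - the device names are whatever your system assigned):

```shell
# Is any md device mounted? Prints a note if none is.
df -ah | grep '/dev/md' || echo "no /dev/md device is mounted"

# The kernel's view of software RAID, independent of mounting:
[ -r /proc/mdstat ] && cat /proc/mdstat || echo "no md support loaded"
```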
No, there is really not any other way if the RAID set is built using the md driver. The NTFSPROGS package (which is now deprecated) has an "ntfsfix" command. But read the manual:
ntfsfix - fix common errors and force Windows to check NTFS
ntfsfix is a utility that fixes some common NTFS problems. ntfsfix is NOT a Linux version of chkdsk. It only repairs some fundamental NTFS inconsistencies, resets the NTFS journal file and schedules an NTFS consistency check for the first boot into Windows.
You may run ntfsfix on an NTFS volume if you think it's damaged and it can't be mounted.
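So the most it can do is reset the journal and schedule a chkdsk for the next Windows boot. Something like this (a sketch only - it assumes the array would be at /dev/md0, which in your case isn't even assembled yet):

```shell
# Sketch: ntfsfix resets the NTFS journal and flags the volume for
# chkdsk on the next Windows boot. /dev/md0 is an assumption; the
# volume must be unmounted and the array assembled first.
DEV=/dev/md0
if [ -b "$DEV" ]; then
    ntfsfix "$DEV"
else
    echo "$DEV does not exist; the array must be assembled first"
fi
```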
This is what I get when I run it:
[root@itx itx]# df -ah
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 12G 6.6G 4.7G 59% /
none 0 0 0 - /proc
/dev/sdb6 214G 439M 214G 1% /home
none 0 0 0 - /proc/sys/fs/binfmt_misc
rpc_pipefs 0 0 0 - /var/lib/nfs/rpc_pipefs
nfsd 0 0 0 - /proc/fs/nfsd
gvfs-fuse-daemon 0.0K 0.0K 0.0K - /home/itx/.gvfs
I don't see /dev/mdX.
Is the RAID set mounted? I don't see it...
* I think you're confused about the "free space" numbers you posted. If a folder doesn't have a volume mounted under it, it will report the disk numbers of the parent filesystem (in your case, I assume that is /).
You may need to manually fix your RAID set to see what's going on. Since the RAID set isn't even running/active, you don't know if there is a filesystem issue.
See if any md device is running:
cat /proc/mdstat
See what the last saved config of md looks like:
cat /etc/mdadm.conf
See if all disks are showing up:
fdisk -l
I am using the graphical Control Center for partitions and don't see it there. Is there a way to mount it without messing anything up? The folder still shows up under /home/itx.
For cat /proc/mdstat I get:
[root@itx itx]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
For cat /etc/mdadm.conf:
[root@itx itx]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 UUID= auto=yes
For fdisk -1 I get:
root@itx itx]# fdisk -1
fdisk: invalid option -- '1'
Usage: fdisk [-b SSZ] [-u] DISK Change partition table
fdisk -l [-b SSZ] [-u] DISK List partition table(s)
fdisk -s PARTITION Give partition size(s) in blocks
fdisk -v Give fdisk version
Here DISK is something like /dev/hdb or /dev/sda
and PARTITION is something like /dev/hda7
-u: give Start and End in sector (instead of cylinder) units
-b 2048: (for certain MO disks) use 2048-byte sectors
That's fdisk dash l (l as in Larry).
Use copy/paste if needed. You need to see if all of the disks are seen by the kernel (fdisk -l) and if at least 3 drives are seen, you can attempt a manual start of the RAID set using the --force switch, such as:
* If all 4 disks are seen by fdisk, you can try and use all 4.
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
If one of the disks has failed, there are plenty of examples online of how to replace a failed disk in an md RAID 5 set.
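Once it assembles, it's worth verifying the array's state before mounting it. A sketch (device names assumed from your mdadm.conf):

```shell
# Sketch: verify array health after a forced assemble.
# /dev/md0 and /dev/sde1 are assumptions based on the thread.
DEV=/dev/md0
if [ -b "$DEV" ]; then
    mdadm --detail "$DEV"   # look for "State : clean" or "clean, degraded"
    cat /proc/mdstat        # a missing member shows up as e.g. [UUU_]
    # If a member merely dropped out and the disk is healthy, re-add it:
    #   mdadm --manage "$DEV" --add /dev/sde1
    # and watch the rebuild progress in /proc/mdstat.
else
    echo "$DEV is not present; the assemble did not succeed"
fi
```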
Many thanks! All drives were there, and the manual assemble with the --force switch worked - three drives active in the RAID, with the fourth as a spare. I mounted the RAID and have all my data back. I am going to build a backup server right now so I won't have this problem again.
Again, many thanks.