Software RAID0 mess up, please help!!!
I need help big time.
I have a file server running Red Hat Linux 7.3. It was built around six months ago and has three HDDs: one holds the primary Linux installation, and the other two (IBM Deskstar 120GB) run software RAID 0 as a public file store, shared via Samba to the rest of the home network. :)
I just got another two drives (the exact same make) and plugged one of them in where the CD-ROM was... except my biggest mistake was forgetting to re-jumper the new drive from Master to Slave. :oops: So on boot-up (no complaints from the BIOS), Linux prepped md0 (the RAID array), noticed the second drive wasn't quite right, and decided to modify the superblock (presumably on the first drive in the array)... I can't find it in the messages, but I'm fairly sure it changed the filesystem from journaled ext3 to ext2, and then a prompt came up asking if I wanted to continue booting, which got a "no" from me before I shut the system down.
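For what it's worth, I haven't written anything to the two original drives since this happened. My plan was to first dump the RAID superblocks off the member disks read-only, with something like the lines below. I'm not sure mdadm is even on a stock 7.3 install (I may only have the raidtools), and /dev/hdb1 and /dev/hdd1 are just guesses at where my member partitions actually are:

    mdadm --examine /dev/hdb1            # print the md superblock stored on the first member
    mdadm --examine /dev/hdd1            # and on the second member
    lsraid -d /dev/hdb1 -d /dev/hdd1     # raidtools equivalent, if mdadm isn't available

Does that look like a sensible first step, or am I already off track?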
I rebooted (without the new drive) and it promptly spent 20 minutes fsck'ing the arrayed drives, with the message log saying "journal inode not in use, but contains data. CLEARED". Now when it tries to mount the array, it reports "wrong fs type, bad option, bad superblock on /dev/md0, or too many mounted file systems". Also, when the md driver (version 0.90.0) loads, it reports that raid personality 2 cannot be found (I'm guessing from when the 2nd drive "disappeared") and that "do_md_run returns -22".
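From the little digging I've done, "personality 2" seems to be raid0 and -22 looks like EINVAL, so my guess is the raid0 module just isn't being loaded for the array any more. Once I'm brave enough I was going to check and retry it roughly like this (please tell me if raidstart is safe to run on a possibly damaged array):

    cat /proc/mdstat       # see which raid personalities are loaded and the current state of md0
    modprobe raid0         # load the raid0 personality if it's missing
    raidstart /dev/md0     # ask the md driver to start the array from /etc/raidtab again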
I can't remember whether it is a linear array or a striped RAID 0, but is there any tool or command that forces the raid driver to remap all the data... or any way to get the backup superblock/inode info back? :cry: :cry: :cry: I'm sitting here with the thought that 210GB of data has disappeared all because of a Master/Master conflict.... arghhhhh! Either way, I'm seriously considering LVM (I was even before this happened). Any help would be appreciated.
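P.S. In case it helps, this is what I was planning to try next once (if!) the array assembles again, before letting any real fsck loose on it. The 4k block size and the 32768 backup superblock location are just assumptions based on mke2fs defaults for a filesystem this big, so please correct me if they're wrong or if any of this could make things worse:

    grep -v '^#' /etc/raidtab               # confirm whether the array is raid0 or linear
    mke2fs -n /dev/md0                      # -n only reports where the backup superblocks would be, it writes nothing
    dumpe2fs /dev/md0 | grep -i superblock  # list the backup superblock locations, if the fs is readable at all
    e2fsck -n -b 32768 -B 4096 /dev/md0     # read-only check against a backup superblock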