can't activate raid0 array after kernel upgrade
I've been trying everything I can to solve this problem, but so far no luck.
I'm a total newb at this though; I've only been using Linux as my main OS for a few weeks.
I'm running Arch Linux, kernel 2.6.36 (it was working fine on 2.6.33).
My RAID array is on the onboard Intel controller (Gigabyte EX58-UD3R mobo). The board also has a GSATA controller, which I've tried as well, but it's the same problem with that one.
It's a 2-disk RAID0 array, and I use a ~300GB partition of it as the system drive for my Windows 7 installation.
I can boot from the array and Windows works flawlessly on it, but as soon as I boot into Linux it gets all messed up. I can't activate the array with dmraid or mdadm, and when I reboot from Linux the array FAILS in the RAID BIOS, showing only one drive as part of the array. (Both drives are detected, but only one of them shows "raid-active".)
It starts working again after a cold boot though, which is pretty weird to me.
It's always the same disk that fails, too.
I've included some info from my kernel log and fdisk.
The disks appear like this:
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb876c595

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1c7fba32

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *        2048      206847      102400    7  HPFS/NTFS
/dev/sde2          206848   552962047   276377600    7  HPFS/NTFS
/dev/sde3       552962048  3907033087  1677035520    7  HPFS/NTFS
Dec 6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
Dec 6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, enabling native capacity
Dec 6 05:04:54 behemoth kernel: ata9: hard resetting link
Dec 6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
Dec 6 05:04:54 behemoth kernel: sde: detected capacity change from 1000203804160 to 1000204886016
Dec 6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
Dec 6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, truncated
Dec 6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] Attached SCSI disk
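As a side note, the "p3 size ... extends beyond EOD" message is consistent with how striped fakeraid looks from outside the array: the partition table on /dev/sde describes the whole ~2 TB RAID0 volume, but the raw member disk is only ~1 TB, so sde3 appears to run past the end of the single disk. A quick sanity check with the sector counts from the fdisk and kernel output above:

```shell
# Sector counts copied from the fdisk/kernel output above (512-byte sectors).
disk_sectors=1953525168              # one 1 TB member disk
p3_start=552962048                   # start of /dev/sde3
p3_size=3354071040                   # "p3 size 3354071040" from the log
p3_end=$((p3_start + p3_size))       # 3907033088: past the end of one disk
array_sectors=$((2 * disk_sectors))  # the full 2-disk RAID0 set

echo "p3 ends at sector $p3_end; a single disk only has $disk_sectors"
[ "$p3_end" -le "$array_sectors" ] && echo "but it fits inside the striped array"
```

So the kernel isn't wrong to truncate it; it just can't see the stripe set as a whole.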
Now I've added this to my mdadm.conf, though I'm not sure it's right:
ARRAY /dev/md0 devices=/dev/sdd2,/dev/sde2
I get the following output:
mdadm: /dev/sde2 has no superblock - assembly aborted
But I don't know how I can fix it.
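For what it's worth, a devices= line like that assumes each partition carries a native mdadm superblock, which an Intel fakeraid (isw/IMSM) set doesn't have; its metadata lives at the end of the whole disks, which would explain the "no superblock" error on /dev/sde2. On mdadm versions with IMSM support, the config entries look more like the sketch below. The UUIDs here are placeholders, not real values; the actual lines would come from running `mdadm --examine --scan`:

```
# /etc/mdadm.conf sketch for an Intel (IMSM) fakeraid set -- placeholder UUIDs
ARRAY metadata=imsm UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md/TERARAOD container=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd member=0
```

The first line declares the container, the second the actual RAID0 volume inside it.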
If someone could point me in the right direction, or even explain some of those things in the kernel log, it would be great.
Thanks in advance.
Have you added that disk before with:
mdadm --add /dev/md0 /dev/sde2
But I never tried mdadm before I started having problems; I always used dmraid.
I just did "dmraid -ay" and it was automatically activated; the partitions were created as /dev/dm-2 and /dev/dm-3, so I could mount them.
If I try that now I get:
/dev/sdd: "jmicron" and "isw" formats discovered (using isw)!
ERROR: isw: wrong number of devices in RAID set "isw_ceijhcahbb_TERARAOD" [1/2] on /dev/sdd
RAID set "isw_ceijhcahbb_TERARAOD" was not activated
ERROR: device "isw_ceijhcahbb_TERARAOD" could not be found
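The first line of that output is interesting: dmraid finds both a "jmicron" and an "isw" signature on /dev/sdd, and stale metadata left behind by another controller can confuse detection. A hedged sketch for inspecting and, only if you're certain it's stale, erasing the leftover format (erasing metadata is destructive; back up first and double-check the format name against the `-r` listing):

```shell
# List the raw devices and which metadata format(s) each one carries:
dmraid -r

# If the "jmicron" signature on /dev/sdd really is a stale leftover,
# erase the metadata for that format only (DESTRUCTIVE -- back up first):
dmraid -r -E -f jmicron /dev/sdd
```

I can't run these against your hardware, so treat them as a starting point rather than a recipe.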
I also tried creating it with mdadm:

sudo mdadm --create /dev/md0 -l0 -n2 -c128 /dev/sde2 /dev/sdd2
mdadm: super1.x cannot open /dev/sdd2: No such file or directory
mdadm: /dev/sdd2 is not suitable for this array.
mdadm: create aborted
Am I doing it wrong?
But like I said in the first post, as soon as I boot into Linux, the second disk in the array gets deactivated in the RAID BIOS, so I'm certain there is some underlying problem here.
I don't know, I'm gonna get some sleep and hack around some more in the "morning" :P
edit: stupid me, I seem to have permanently borked the array somehow. Thank god for backups...
I created a new one, and now mdadm seems to be automatically creating a device on boot called /dev/md127, 1717.3GB in size and with no partition table.
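A /dev/md127 with no partition table sounds like mdadm's IMSM *container* device, which newer mdadm/udev setups assemble automatically; the actual striped volume should then show up as a second md device inside the container. Some commands to check (a sketch; the device name md127 is taken from the post above and may differ after a reboot):

```shell
cat /proc/mdstat              # md127 should appear as an inactive "imsm" container
mdadm --detail /dev/md127     # container details: metadata type and member disks
mdadm --examine --scan        # prints ARRAY lines for the container and its volume
mdadm -I /dev/md127           # incremental assembly: start the volume(s) inside it
```

The first three are read-only queries; only the last one changes anything.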
Last edited by raginaot; 12-06-2010 at 08:26 AM.
Indeed, you're using your motherboard's fake hardware RAID. That's always delicate.
To rebuild your array you could try dmraid -R. Something like this:
dmraid -R isw_ceijhcahbb_TERARAOD /dev/sde2
But I'm not an expert here at all, so I won't be able to give you further advice if that doesn't work.
I'm not entirely sure what I did, but I started by permanently ****ing up the array with some mdadm command. Then I deleted it in the BIOS, and after that I had to delete all the partitions on another Windows machine, because the table was so majorly fubar that Linux thought one of the drives was several PB in size :S
Then I created a new array, rebooted a few times, banged my head on the keyboard for a while, sent Linus Torvalds an angry letter, and somehow it's working now.
Cheers for trying to help, manko. Appreciate it.