  #1

    Can't activate RAID0 array after kernel upgrade


    Hi,
    I've been trying everything I can think of to solve this problem, but so far no luck.
    I'm a total newb at this, though; I've only been using Linux as my main OS for a few weeks.

    I'm running Arch Linux with kernel 2.6.36 (everything was working fine on 2.6.33).

    My RAID array is on the onboard Intel controller (Gigabyte EX58-UD3R mobo). The board also has a GSATA controller, which I've tried as well, but I get the same problem there.

    It's a 2-disk RAID0 array, and I use a ~300 GB partition on it as the system drive for my Windows 7 installation.
    I can boot from the array and Windows works flawlessly on it, but as soon as I boot into Linux it gets all messed up: I can't activate the array with dmraid or mdadm, and when I reboot from Linux the array FAILS in the RAID BIOS, showing only one drive as part of the array (both drives are detected, but only one of them shows "raid-active").

    It starts to work again after a cold boot though, which is pretty weird to me.
    It's always the same disk that fails, too.

    I've included some info from my kernel log and fdisk below.

    The disks appear like this:
    Code:
    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xb876c595
    
    Disk /dev/sdd doesn't contain a valid partition table
    
    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1c7fba32
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1   *        2048      206847      102400    7  HPFS/NTFS
    /dev/sde2          206848   552962047   276377600    7  HPFS/NTFS
    /dev/sde3       552962048  3907033087  1677035520    7  HPFS/NTFS
    and here are a few lines from my kernel log that I thought might be relevant:

    Code:
    Dec  6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
    Dec  6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, enabling native capacity
    Dec  6 05:04:54 behemoth kernel: ata9: hard resetting link
    and
    Code:
    Dec  6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
    Dec  6 05:04:54 behemoth kernel: sde: detected capacity change from 1000203804160 to 1000204886016
    Dec  6 05:04:54 behemoth kernel: sde: sde1 sde2 sde3
    Dec  6 05:04:54 behemoth kernel: sde: p3 size 3354071040 extends beyond EOD, truncated
    Dec  6 05:04:54 behemoth kernel: sd 8:0:0:0: [sde] Attached SCSI disk
    You can see the entire log here: pastebin.com/duWu8zYB

    Now I added this to my mdadm.conf (not sure if it's right, though):
    Code:
    ARRAY /dev/md0 devices=/dev/sdd2,/dev/sde2
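    I guess the cleaner way would have been to let mdadm generate the entry itself, something like this (just an assumption on my part that this even picks up this kind of fakeRAID metadata):
    Code:
    mdadm --examine --scan | sudo tee -a /etc/mdadm.conf   # append auto-detected ARRAY lines
    but I wasn't sure about that, so I just wrote the line by hand.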
    Either way, when I run "mdadm --assemble --scan" I get the following output:

    Code:
    mdadm: /dev/sde2 has no superblock - assembly aborted
    All I can think of is that the system is reading the partition table on /dev/sde as if it belonged to a single, non-RAID disk. That table really describes the whole ~2 TB striped volume (sde3 ends at sector 3907033087, while a single disk only has 1953525168 sectors), so the last partition looks bigger than the disk itself and things start failing.

    But I don't know how I can fix it.
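    For what it's worth, I guess I could at least check whether any RAID metadata is still visible on the disks with something like this (I'm only assuming these are the right commands to poke at it, and I'm not sure how to interpret the output):
    Code:
    mdadm --examine /dev/sdd /dev/sde   # look for RAID superblocks on the whole disks
    dmraid -r                           # list devices dmraid recognises as RAID members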

    If someone could point me in the right direction or even explain some of those things in the kernel log, it would be great.

    Thanks in advance

    --raginaot

  #2
    Have you added that disk before with
    Code:
    mdadm --add /dev/md0 /dev/sde2
    If /dev/sde2 has never been part of the array before it doesn't have a RAID superblock.
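    You can check whether the partition carries any md metadata at all with this (rough pointer; the exact output depends on your mdadm version):
    Code:
    mdadm --examine /dev/sde2
    If that doesn't find a superblock, the partition has never been part of an mdadm array.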

  #3
    Quote Originally Posted by Manko10
    Have you added that disk before with
    Code:
    mdadm --add /dev/md0 /dev/sde2
    If /dev/sde2 has never been part of the array before it doesn't have a RAID superblock.
    I have not, and I can't, since apparently md0 doesn't exist.
    But I never tried mdadm before I started having problems; I always used dmraid.
    Before, just running "dmraid -ay" activated the array automatically and created the partitions as /dev/dm-2 and /dev/dm-3 so I could mount them.
    If I try that now I get:
    Code:
    /dev/sdd: "jmicron" and "isw" formats discovered (using isw)!
    ERROR: isw: wrong number of devices in RAID set "isw_ceijhcahbb_TERARAOD" [1/2] on /dev/sdd
    RAID set "isw_ceijhcahbb_TERARAOD" was not activated
    ERROR: device "isw_ceijhcahbb_TERARAOD" could not be found
    Then I tried creating a new array using:
    Code:
    sudo mdadm --create /dev/md0 -l0 -n2 -c128 /dev/sde2 /dev/sdd2
    but the output I get is
    Code:
    mdadm: super1.x cannot open /dev/sdd2: No such file or directory
    mdadm: /dev/sdd2 is not suitable for this array.
    mdadm: create aborted
    so now it's having a problem with sdd2 :S (I guess that's because /dev/sdd doesn't have a partition table, so there is no /dev/sdd2 device).
    Am I doing it wrong?

    But like I said in the first post, as soon as I boot into Linux the second disk in the array is deactivated in the RAID BIOS, so I'm certain there is some underlying problem here.

    I don't know, I'm gonna get some sleep and hack around some more in the "morning" :P


    edit: stupid me, I seem to have permanently borked the array somehow. Thank god for backups...
    I created a new one, and now mdadm seems to automatically create a device called /dev/md127 on boot, 1717.3 GB in size and with no partition table.
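    I guess I can at least deactivate that device manually with something like this (just assuming that's safe to do; I haven't dug into why it appears yet):
    Code:
    sudo mdadm --stop /dev/md127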
    Last edited by raginaot; 12-06-2010 at 08:26 AM.

  #4
    Indeed, you're using the fake hardware RAID (fakeRAID) of your motherboard. That's always delicate.

    To rebuild your array you should go with dmraid -R. Something like this:
    Code:
    dmraid -R isw_ceijhcahbb_TERARAOD /dev/sde2
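    Before you run that, it's probably worth checking what dmraid currently sees, just so you know what state the set is in (a rough sketch; I don't know exactly how your set will show up):
    Code:
    dmraid -r   # list block devices carrying RAID metadata
    dmraid -s   # show the status of the discovered RAID sets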
    Maybe you also have to play around with your BIOS settings a bit. You can find more information here: [opensuse] Removing a failed disk from a dmraid array
    But I'm not an expert here at all so I won't be able to give you further advice if that doesn't work.

  #5
    IT WORKS

    I'm not entirely sure what I did, but I started by permanently ****ing up the array with some mdadm command. Then I deleted it in the BIOS, and then I had to delete all the partitions on another Windows machine, because the table was so majorly fubar that Linux thought one of the drives was several PB in size :S
    Then I created a new array, did a few reboots, banged my head on the keyboard for a while, sent Linus Torvalds an angry letter, and somehow it's working now.

    Cheers for trying to help, Manko. Appreciate it.
