raid6: Disk failure on sdb1, disabling device.
sd 9:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 9:0:0:0: [sdb] Sense Key : Illegal Request [current]
sd 9:0:0:0: [sdb] Add. Sense: Invalid field in cdb
end_request: I/O error, dev sdb, sector 2930271935
end_request: I/O error, dev sdb, sector 2930271935
md: super_written gets error=-5, uptodate=0
raid5: Disk failure on sdb1, disabling device.
raid5: Operation continuing on 3 devices.
I recover the array with:

mdadm /dev/md2 --fail /dev/sdb1
mdadm /dev/md2 --remove /dev/sdb1
mdadm /dev/md2 --add /dev/sdb1
In other words, my RAID set fails every so many mounts, but I know how to fix it.
What I want to find out is:
- Why does a random drive get failed every now and then?
- How can I prevent that drive from being failed?
BTW: I have a second RAID set that has been functioning for years without error, so the setup I use must be correct. The only difference between the two arrays is the vendor of the disks.
The problem is that the drive automatically goes into power-save mode, and spinning back up sometimes takes too long; md then believes the drive is no longer available and fails it.
To work around this, I created a small logical volume on the RAID set, to which I write a few kB every five minutes from cron. The write is very fast and it simply prevents the drives from spinning down.
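A minimal sketch of such a keep-alive cron job, assuming the small logical volume is mounted at /mnt/raid-keepalive (the mount point and file name are placeholders, adjust them to your setup):

# /etc/cron.d/raid-keepalive -- write 4 kB to the array every five minutes
*/5 * * * * root dd if=/dev/zero of=/mnt/raid-keepalive/keepalive.bin bs=1K count=4 conv=fsync 2>/dev/null

The conv=fsync matters: without it the write can sit in the page cache and never reach the platters, in which case the drives would still spin down.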
It would still be nice to be able to change the timeout setting, though; it is currently set to somewhere around 10 seconds ...
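The thread does not say which timeout that is, but if it is the drive's own power-management timer, hdparm can often change or disable it; whether a given drive honours these commands depends on its firmware. Raising the kernel's SCSI command timeout is another option, so a slow spin-up is tolerated instead of treated as a failure:

hdparm -B 255 /dev/sdb   # disable Advanced Power Management, if the drive supports it
hdparm -S 0 /dev/sdb     # disable the standby (spin-down) timer; -S 242 would mean one hour
echo 60 > /sys/block/sdb/device/timeout   # SCSI command timeout in seconds (default is 30)

These are general knobs, not settings confirmed in this thread, and hdparm changes are lost on power cycle unless reapplied at boot.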