Hi Forum, I just built a RAID5 software array with 4x 80GB disks.

On every member of the array I created a primary Linux raid autodetect partition

and then ran:
mdadm --create /dev/md0 --verbose --level=5 --raid-disks=4 /dev/hdc1 /dev/hde1 /dev/hdi1 /dev/hdk1 --spare-disks=0
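
For completeness, this is roughly how I partitioned each member beforehand (a sketch from memory, not an exact transcript; type fd is "Linux raid autodetect"):

# repeated for /dev/hdc, /dev/hde, /dev/hdi and /dev/hdk
fdisk /dev/hdc
#   n  ->  new primary partition 1 spanning the whole disk
#   t  ->  change the partition type to fd (Linux raid autodetect)
#   w  ->  write the partition table and quit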

and this is what I get from # cat /proc/mdstat:
Personalities : [linear] [raid0] [raid1] [raid5] [multipath]
read_ahead 1024 sectors
md0 : active raid5 hdk1[3] hdi1[2] hde1[1] hdc1[0]
234444288 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

Looks good. BUT # mdadm --examine /dev/hdc1 shows:

/dev/hdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 1b9a9f5c:bba56893:71c1843f:b35b1d5a
Creation Time : Thu Mar 18 16:08:01 2004
Raid Level : raid5
Device Size : 78148096 (74.52 GiB 80.02 GB)
Raid Disks : 4
Total Disks : 5
Preferred Minor : 0

Update Time : Thu Mar 18 17:15:43 2004
State : dirty, no-errors
Active Drives : 4
Working Drives : 4
Failed Drives : 1
Spare Drives : 0
Checksum : 2ae4f2ca - correct
Events : 0.2

Layout : left-symmetric
Chunk Size : 64K

      Number   Major   Minor   RaidDisk   State
this     0       22      1        0       active sync   /dev/hdc1
   0     0       22      1        0       active sync   /dev/hdc1
   1     1       33      1        1       active sync   /dev/hde1
   2     2       56      1        2       active sync   /dev/hdi1
   3     3       57      1        3       active sync   /dev/hdk1

Raid Disks : 4
Total Disks : 5
Working Drives : 4
Failed Drives : 1

WHY???? But if there really were 4 drives working and one failed, # cat /proc/mdstat would not show 4 of 4 drives running. And the size of the array is 240GB (3x 80GB of data plus 1x 80GB of parity spread over all drives), not the 320GB that would match 5 RAID disks...
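
Just to show how I arrive at the 240GB figure (plain shell arithmetic, using the Device Size that --examine reports per member):

# each member: Device Size = 78148096 blocks of 1 KiB
# RAID5 usable space = (raid disks - 1) * device size = 3 * 78148096
echo $((3 * 78148096))   # prints 234444288 -- exactly the block count in /proc/mdstat
# 234444288 KiB is about 240GB, which fits 4 raid disks, not 5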

What is happening here? Can someone help me?
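
If more output would help with the diagnosis, I assume the kernel's runtime view can be compared against the on-disk superblocks with something like this (I have not pasted that output here):

mdadm --detail /dev/md0        # the kernel's current view of the running array
mdadm --examine /dev/hde1      # superblock of another member, for comparison
cat /proc/mdstat               # quick sanity check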