Server with 4 disk partitions in a RAID 5 array using md.

Yesterday the array failed with two devices marked as faulty. After rebooting into a rescue environment I was able to force the assembly and start the array, and everything looks to be okay as far as the data goes.
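For reference, this is roughly the sequence I used to get it running again (reconstructing from memory, so treat the exact device list as my best guess rather than a verbatim transcript):

Code:
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

But when I run: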

Code:
mdadm --examine /dev/sda3
I get (truncating to the interesting bits):

Code:
      Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 1448195216 (690.55 GiB 741.48 GB)
     Array Size : 4344585216 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (690.55 GiB 741.48 GB)

    Array Slot : 0 (0, 1, 2, failed, 3)
   Array State : Uuuu 1 failed
And

Code:
mdadm --detail /dev/md1
yields:

Code:
     Array Size : 2172292608 (2071.66 GiB 2224.43 GB)
  Used Dev Size : 1448195072 (1381.11 GiB 1482.95 GB)
   Raid Devices : 4
  Total Devices : 4

 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
Additionally, the 'slot' values for devices a-d line up like this (pulled with the loop shown after the list):

a - 0
b - 1
c - 2
d - 4 (!)
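That slot list came from running --examine against each partition, with something like the following (partition names assumed to match the layout above):

Code:
for d in /dev/sd[abcd]3; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Array Slot|Array State'
done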

The 'Array Size' reported by examine is twice what it should be, both compared with the output from detail (4344585216 vs 2172292608, exactly a factor of two) and compared with the same array on a twinned server. Also, why do 'Array State' and 'Array Slot' from examine indicate a fifth device that isn't reported anywhere else?

And how do I fix this?