All, I am currently building a platform that has two servers working in an active/standby mode. Both servers need to be able to see and access a single shared volume; the way we are doing it is as follows.

Two servers (RedHat ES4) run the iSCSI initiator and talk to a SAN array which holds the volume (100GB). The iSCSI initiator works just fine, and the volume is a GFS volume which was created using the gfs_mkfs command. A script runs at startup to mount the volume like this:
mount -t gfs -o lockproto=lock_dlm /dev/sdc /shared
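
For reference, the filesystem was created roughly like this (the cluster name, filesystem label and journal count below are placeholders rather than the real values):

gfs_mkfs -p lock_dlm -t mycluster:shared_gfs -j 2 /dev/sdc   # one journal per node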

This works fine on reboot, and both servers can see and share the storage. However, if the primary server fails then the secondary server loses connectivity to the volume and can't see the data any more. This of course effectively means that the failover is not working at all and that the whole idea of having two servers is pointless.
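
In case it helps, this is a rough sketch of what I can check on the surviving node the next time the primary drops (assuming the standard Cluster Suite tools, cman and rgmanager, are installed):

cman_tool status      # is the cluster still quorate from this node's point of view?
cman_tool nodes       # which members does this node think are still up?
clustat               # service/member view, if rgmanager is running
cat /proc/scsi/scsi   # is the iSCSI LUN behind /dev/sdc still visible?
dmesg | tail          # any GFS withdraw, DLM or iSCSI errors logged?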

Has anyone come across this type of thing before? I would really like to get this resolved. Surely someone has used this kind of setup before; after all, RedHat promote the use of clustered file systems, so it must just work. I must be missing something or doing something wrong.

HELP....


Thanks

Chris K