07-09-2008 #1 (Join Date: Jun 2008)
mount.gfs: lock_dlm_join: gfs_controld error: -16/luci fails to start
1. After I configured the cluster, luci forced the node to restart. Upon restart, I was able to start cman, but was unable to start luci:
# service cman start
Starting cluster: [all services start.]
# service luci start
Starting luci: [FAILED]
What could be the problem here? I used cman_tool to verify that the node is part of the cluster (it is), but I wanted to see it in luci as well. This failure isn't mentioned in any of the docs.
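In case it helps, here is the diagnostic sequence I'm planning to run next. It assumes the stock RHEL 5 Conga tooling (luci_admin from the luci package, logging through syslog to /var/log/messages), which may not match every install:

```shell
#!/bin/sh
# Sketch only: probe why "service luci start" fails.
# Stock RHEL 5 Conga paths and commands are assumed throughout.

# luci must be initialized once after install; if it never was, the init
# script fails. luci_admin ships with the luci package (assumption: present).
if command -v luci_admin >/dev/null 2>&1; then
    luci_admin init    # sets the admin password and seeds luci's database
fi

# The init script usually logs the real error through syslog:
grep -i luci /var/log/messages 2>/dev/null | tail -n 5

# SELinux denials are a common silent cause of service startup failures:
if command -v getenforce >/dev/null 2>&1; then
    getenforce
fi
```

Every probe is guarded, so the script is safe to paste on a box that doesn't have Conga installed at all.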
2. The whole reason for having a cluster is so I can export a GFS partition over iSCSI. To do that I obviously need a GFS partition, so I ran:
# gfs_mkfs -t test-cluster:gfs1 -p lock_dlm -j 3 /dev/VolGroup00/LogVol03
where LogVol03 is a logical volume I created while installing RHEL 5.
This worked fine. However, when I tried to mount it, here's what happened:
# mount -t gfs /dev/VolGroup00/LogVol03 temp
/sbin/mount.gfs: lock_dlm_join: gfs_controld error: -16
/sbin/mount.gfs: error mounting lockproto lock_dlm
where temp (actually /mnt/temp; /mnt was the working directory) is where I want to mount LogVol03. I googled extensively and found a suggestion that the new GFS kernel-module RPM (kmod-gfs-0.1.23-5.el5.x86_64.rpm) should fix this problem, but it didn't. I also moved the /lib/modules/<old_kernel_dir>/extra/gfs and gnbd directories into /lib/modules/<new_kernel_dir>/extra, and when that didn't work, into /lib/modules/<new_kernel_dir>/kernel/fs. What's the problem here?
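One clue: the -16 is almost certainly the kernel's EBUSY negated (errno 16, "Device or resource busy"), meaning gfs_controld refused to join the mount group. Common causes are the fence domain never forming and a lock-table cluster name that doesn't match cluster.conf. These are the checks I'd run next, sketched with the stock cman/gfs-utils commands; whether they all exist on a given box is an assumption, so each one is guarded:

```shell
#!/bin/sh
# Sketch: first checks for "lock_dlm_join: gfs_controld error: -16".
# All cluster-specific commands are guarded so this is safe to paste anywhere.

# -16 is -EBUSY; confirm the errno text (python is used only as an errno table):
PY=$(command -v python3 || command -v python || true)
[ -n "$PY" ] && "$PY" -c 'import os; print(os.strerror(16))'

# The gfs kernel module must be loaded before mounting:
lsmod 2>/dev/null | grep -qw gfs || echo "gfs module not loaded -- try: modprobe gfs"

# The node must be in the fence domain for the mount group to form;
# group_tool ships with cman on RHEL 5:
command -v group_tool >/dev/null 2>&1 && group_tool ls

# The cluster half of the lock table (test-cluster:gfs1) must match
# the cluster name in /etc/cluster/cluster.conf exactly:
command -v cman_tool >/dev/null 2>&1 && cman_tool status | grep -i 'cluster name'
command -v gfs_tool >/dev/null 2>&1 && gfs_tool sb /dev/VolGroup00/LogVol03 table || true
```

On a one-node cluster with no fence device configured, fencing never completes, and that alone is enough to produce the EBUSY from gfs_controld.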