- 07-04-2012 #1
bash: umount/: no such file or directory
I am trying to connect two systems (let's say two, for the time being) so that they support clustering.
For this I installed the following packages:
# sudo apt-get install pacemaker sysv-rc-conf glusterfs-server glusterfs-examples glusterfs-client chkconfig nmap ntp
node[x]:~# mkfs.ext3 /dev/sd??
node[x]:~# blkid -g
node[x]:~# blkid /dev/sd?? >> /etc/fstab
node[x]:~# vi /etc/fstab

You must end up with a line like this in /etc/fstab:

UUID=9dc20d6c-a3d7-4667-a9b1-e8939a0473f1 /export ext3 defaults 0 2

node[x]:~# mount /export/
node[x]:~# mkdir /export/part1
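As a sketch of how the `blkid` output maps onto the fstab line above (the helper name and the parsing are my own illustration, not something from this thread):

```python
import re

def blkid_to_fstab(blkid_line, mount_point="/export", fstype="ext3"):
    """Turn one line of `blkid` output into an fstab entry like the one above."""
    m = re.search(r'UUID="([0-9a-fA-F-]+)"', blkid_line)
    if m is None:
        raise ValueError("no UUID found in blkid output")
    # fields: device mount-point fstype options dump pass
    return f"UUID={m.group(1)} {mount_point} {fstype} defaults 0 2"

# Example blkid line for the partition formatted above
line = '/dev/sdb1: UUID="9dc20d6c-a3d7-4667-a9b1-e8939a0473f1" TYPE="ext3"'
print(blkid_to_fstab(line))
```

Redirecting `blkid` straight into /etc/fstab, as in the commands above, still requires editing the appended line into this field layout by hand.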
Here are the contents of the two files; for the second system the IP addresses may change, though.
This is glusterd.vol:
volume management
  type mgmt/glusterd
  option working-directory /etc/glusterd
  option transport-type socket,rdma
  option transport.socket.keepalive-time 10
  option transport.socket.keepalive-interval 2
end-volume

The lines above are the defaults, i.e. the only lines included in this file when the glusterfs package is installed. I commented them out at first because I thought they wouldn't be needed, but then I got the endpoint connection error when I tried to mount /mnt/glusterfs.
volume VolNode1
  type protocol/client
  option remote-subvolume brick
  option transport-type tcp/client
  option remote-host 192.168.3.13   # system 1 IP
  option remote-port 820
  option transport.socket.nodelay on
end-volume

volume VolNode2
  type protocol/client
  option remote-subvolume brick
  option transport-type tcp/client
  option remote-host 192.168.3.99   # system 2 IP
  option remote-port 820
  option transport.socket.nodelay on
end-volume

volume afr
  type cluster/replicate
  subvolumes VolNode2 VolNode1
  option read-subvolume VolNode2
end-volume

volume wb
  type performance/write-behind
  subvolumes afr
  option cache-size 4MB
end-volume

volume cache
  type performance/io-cache
  subvolumes wb
  option cache-size 1024MB
  option cache-timeout 60
end-volume
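The per-node `protocol/client` stanzas in that volfile follow a fixed pattern, so the file can be generated for any node list. A small sketch (the function name, defaults, and ascending subvolume order are my own, not GlusterFS tooling):

```python
def make_client_volfile(nodes, port=820, brick="brick"):
    """Emit protocol/client stanzas plus a cluster/replicate volume
    in the same shape as the volfile above."""
    parts = []
    for i, host in enumerate(nodes, start=1):
        parts.append(
            f"volume VolNode{i}\n"
            f"  type protocol/client\n"
            f"  option remote-subvolume {brick}\n"
            f"  option transport-type tcp/client\n"
            f"  option remote-host {host}\n"
            f"  option remote-port {port}\n"
            f"  option transport.socket.nodelay on\n"
            f"end-volume\n"
        )
    subvols = " ".join(f"VolNode{i}" for i in range(1, len(nodes) + 1))
    parts.append(
        "volume afr\n"
        "  type cluster/replicate\n"
        f"  subvolumes {subvols}\n"
        "end-volume\n"
    )
    return "\n".join(parts)

print(make_client_volfile(["192.168.3.13", "192.168.3.99"]))
```

The performance translators (write-behind, io-cache) are left out of the sketch; they stack on top of the replicate volume exactly as in the file above.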
- 07-20-2012 #2
Why not just use NFS to share the folders and/or file systems? Sorry, but I've not used glusterfs, so I can't really help; but sometimes when one approach isn't working, trying another may solve the immediate problem.

Sometimes, real fast is almost as good as real time.
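For comparison, the NFS route suggested here could look roughly like the following; the export path, network, and options are illustrative, not from this thread:

```shell
# On the server: an example entry in /etc/exports
#   /export 192.168.3.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra                                  # re-read /etc/exports

# On the client: mount the exported directory
sudo mount -t nfs 192.168.3.13:/export /mnt/nfs
```

Note that plain NFS gives shared access to one server's disk; it does not replicate data between the two machines the way the cluster/replicate volume above does.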
Just remember, Semper Gumbi - always be flexible!