  1. #1
    Just Joined!
    Join Date
    Oct 2010
    Location
    Hyderabad
    Posts
    7

    Cluster for Squid proxy server


    Hi all,

    I have a proxy server installed on RHEL 5.4. Now I want to create a cluster for it, so that when one proxy server fails (due to maintenance or a power problem), the second one serves the user requests. I have no idea how to create a cluster in Linux. I have tried searching on Google as well, but I didn't understand anything, so if anybody can help me with this it would be great. I have been trying for a long time but have not been able to do it successfully. To test whether the cluster is working, I will just shut down the first server and check that the second one takes over. Can anyone please help me with this?

  2. #2
    Just Joined!
    Join Date
    Oct 2011
    Posts
    50
    You will need to share one IP between the two servers and use heartbeat to monitor them and switch that IP over to the surviving server when one of them goes down.
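    A rough sketch of the monitoring side, assuming two hosts named proxy1 and proxy2 (names and interface are placeholders); the same /etc/ha.d/ha.cf goes on both machines:

    keepalive 2          ##send a heartbeat every 2 seconds
    deadtime 10          ##declare the peer dead after 10 seconds of silence
    bcast eth0           ##interface used for the heartbeats
    auto_failback on     ##move resources back when the preferred node returns
    node proxy1
    node proxy2

    The shared IP itself is then listed in /etc/ha.d/haresources so heartbeat brings it up on whichever node is alive.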

  3. #3
    Just Joined!
    Join Date
    Oct 2010
    Location
    Hyderabad
    Posts
    7
    Hi vasile002. Thank you for your reply, but can you please give me the complete step-by-step installation instructions?

  4. #4
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    3,345
    Frankly, that is a bit much to ask.
    You can start here:
    LVS Documentation
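    For orientation only, the LVS idea in this context: a director machine holds a virtual IP and spreads the proxy traffic over the real squid boxes. With ipvsadm that is roughly (all addresses below are made-up placeholders):

    #ipvsadm -A -t 192.168.0.200:3128 -s rr                       ##virtual squid service on the VIP, round-robin
    #ipvsadm -a -t 192.168.0.200:3128 -r 192.168.0.11:3128 -m     ##first real squid, NAT forwarding
    #ipvsadm -a -t 192.168.0.200:3128 -r 192.168.0.12:3128 -m     ##second real squid, NAT forwarding

    The documentation above covers making the director itself redundant and health-checking the real servers (e.g. with ldirectord).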
    You must always face the curtain with a bow.

  5. #5
    Trusted Penguin Irithori's Avatar
    Join Date
    May 2009
    Location
    Munich
    Posts
    3,345
    Another approach would be to deploy 2 or more squid machines with no automatic failover.
    Then let the browser choose the proxy via a proxy.pac file
    ProxyPACFiles.com - The Practical Proxy PAC file guide
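    A minimal proxy.pac with failover looks roughly like this (hostnames and port are placeholders):

    function FindProxyForURL(url, host) {
        // try squid1 first; if it is unreachable the browser falls back to squid2, then goes direct
        return "PROXY squid1.example.com:3128; PROXY squid2.example.com:3128; DIRECT";
    }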

    This might not be possible in all use cases.
    Also, since part of the configuration now lives on each client machine, troubleshooting might be harder.
    You must always face the curtain with a bow.

  6. #6
    Just Joined!
    Join Date
    Oct 2011
    Posts
    50
    The first thing you will need is an IP address that you can switch between the two servers. The servers will need to be on the same network segment, and if it is a public IP your ISP will need to allow it to move between the machines.

    As for heartbeat, the default configs are fine; you only need to add the IP switch to the haresources file (see the sketch below).

    Just read the official heartbeat docs.
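    For the IP-only setup that is a single line in /etc/ha.d/haresources; a sketch, assuming the primary is called node1 and using a made-up floating IP:

    node1 IPaddr::192.168.0.200/24/eth0

    The file must be identical on both nodes; heartbeat works out from it which node should currently hold the address.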

  7. #7
    Just Joined!
    Join Date
    Jun 2012
    Posts
    1

    High Availability Cluster SOP

    Quote Originally Posted by chaitanyakumar
    I have a proxy server installed on RHEL 5.4. Now I want to create a cluster for it, so that when one proxy server fails (due to maintenance or a power problem), the second one serves the user requests. [...]


    High Availability Cluster for a Transparent Proxy

    Primary Server: node1 IP:192.168.2.145
    Secondary Server: node2 IP:192.168.2.146

    node1
    Do each step on both servers; where the slave (node2) needs different settings, that is mentioned explicitly.
    #vi /etc/hosts
    192.168.2.145 node1
    192.168.2.146 node2
    :wq!
    #rpm -qa | grep ntp
    #vi /etc/ntp.conf

    server 0.centos.pool.ntp.org
    server 1.centos.pool.ntp.org
    server 2.centos.pool.ntp.org

    server 127.127.1.0
    #fudge 127.127.1.0 stratum 10
    :wq!


    node2
    #vi /etc/ntp.conf
    server 192.168.2.145
    #server 0.centos.pool.ntp.org
    #server 1.centos.pool.ntp.org
    #server 2.centos.pool.ntp.org
    #server 127.127.1.0
    #fudge 127.127.1.0 stratum 10
    :wq!

    #/etc/init.d/ntpd restart
    #date
    #watch ntpq -p -n
    #fdisk -l

    Make partition for DRBD replication
    #fdisk /dev/sdb
    Command (m for help): m                       ##optional: list the fdisk commands
    Command (m for help): n                       ##create a new partition
    p                                             ##primary partition
    Partition number (1-4): 1
    First/last cylinder: press Enter to accept the defaults (use the whole disk)
    Command (m for help): p                       ##print the partition table
    Command (m for help): t                       ##change the partition type
    Selected partition 1
    Hex code (type L to list codes): 8e           ##8e = Linux LVM
    Command (m for help): p
    It will show the partition like this:

    Disk /dev/sdb: 8589 MB, 8589934592 bytes
    255 heads, 63 sectors/track, 1044 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot Start End Blocks Id System
    /dev/sdb1 1 1044 8385898+ 8e Linux LVM


    Command (m for help): w                       ##write the table and exit

    #partprobe                                    ##make the kernel re-read the partition table

    #pvcreate /dev/sdb1
    #vgcreate vgdrbd /dev/sdb1
    #lvcreate -n lvdrbd -L 8000M vgdrbd
    #lvdisplay | more
    #vgdisplay | more
    #vi /etc/sysctl.conf
    net.ipv4.conf.eth0.arp_ignore = 1        ##only answer ARP for addresses configured on the receiving interface
    net.ipv4.conf.all.arp_announce = 2       ##use the best local source address in outgoing ARP requests
    net.ipv4.conf.eth0.arp_announce = 2
    :wq!
    #sysctl -p

    DRBD Installation
    #yum install drbd82 kmod-drbd82
    #cp /usr/share/doc/drbd82/drbd.conf /etc/drbd.conf.org
    #vi /etc/drbd.conf

    global {
        usage-count yes;
    }

    common {
        syncer { rate 10M; }
    }

    resource r0 {
        protocol C;

        handlers {
            pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
            pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
            local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
            outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
        }

        startup {
        }

        disk {
            on-io-error detach;
        }

        net {
            after-sb-0pri disconnect;
            after-sb-1pri disconnect;
            after-sb-2pri disconnect;
            rr-conflict disconnect;
        }

        syncer {
            rate 10M;
            al-extents 257;
        }

        on node1 {
            device    /dev/drbd0;
            disk      /dev/vgdrbd/lvdrbd;
            address   192.168.2.145:7788;
            meta-disk internal;
        }

        on node2 {
            device    /dev/drbd0;
            disk      /dev/vgdrbd/lvdrbd;
            address   192.168.2.146:7788;
            meta-disk internal;
        }
    }

    #modprobe drbd
    #echo "modprobe drbd" >> /etc/rc.local
    #drbdadm create-md r0
    #groupadd haclient
    #chgrp haclient /sbin/drbdsetup
    #chmod o-x /sbin/drbdsetup
    #chmod u+s /sbin/drbdsetup
    #chgrp haclient /sbin/drbdmeta
    #chmod o-x /sbin/drbdmeta
    #chmod u+s /sbin/drbdmeta
    #drbdadm attach r0
    #drbdadm syncer r0
    #drbdadm connect r0

    This command is for the primary only, not for the slave
    #drbdadm -- --overwrite-data-of-peer primary r0

    Now on both servers
    #drbdadm up all

    Now this command is just for the primary/master server
    #drbdadm -- primary all

    Watch the initial sync (on either server) until both disks show UpToDate
    #watch cat /proc/drbd

    Create the mount point on both servers
    #mkdir /data/

    Make the filesystem and mount it on the primary only (a DRBD device cannot be mounted while it is Secondary)
    #mkfs.ext3 /dev/drbd0
    #mount /dev/drbd0 /data/
    #df -h
    Heartbeat Installation
    #yum install heartbeat heartbeat-pils heartbeat-stonith heartbeat-devel
    #vi /etc/ha.d/ha.cf

    #logfacility local0
    keepalive 2                ##heartbeat interval in seconds
    #deadtime 30               ##the stock example uses 30; we use 10
    deadtime 10                ##declare the peer dead after 10 seconds of silence
    #we use one heartbeat link, eth0
    bcast eth0                 ##a dedicated interface such as eth1 is a better option if available
    #serial /dev/ttyS0
    baud 19200                 ##only relevant if the serial link above is enabled
    auto_failback on           ##move resources back to node1 when it returns
    node node1
    node node2
    :wq!
    #vi /etc/ha.d/haresources
    node1 IPaddr::192.168.2.190/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 squid
    :wq!

    #vi /etc/ha.d/authkeys
    auth 3
    3 md5 passwd
    :wq!
    #chmod 600 /etc/ha.d/authkeys
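    The example above uses the literal word passwd as the md5 secret; any random string will do, and one way to generate one (this step is not in the original procedure) is:

    #dd if=/dev/urandom count=4 2>/dev/null | md5sum | cut -d' ' -f1

    Paste the result in place of passwd; /etc/ha.d/authkeys must be identical on node1 and node2.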
    #chkconfig --level 235 heartbeat on
    If that command gives an error, reinstall heartbeat and run it again:
    #yum install -y heartbeat
    #chkconfig --level 235 heartbeat on
    #service drbd status
    #echo 1 > /proc/sys/net/ipv4/ip_forward    ##for persistence across reboots, also set net.ipv4.ip_forward = 1 in /etc/sysctl.conf
    #cat /proc/sys/net/ipv4/ip_forward
    #yum install squid
    #vi /etc/squid/squid.conf
    acl our_networks src 192.168.2.0/24
    http_access allow our_networks
    cache_dir ufs /data/squid 100 16 256
    http_port 3128 transparent
    :wq!
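    Optionally (not part of the original steps), the squid configuration can be syntax-checked before going on:

    #squid -k parse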

    #cd /data
    #mkdir squid               ##on the primary only: the contents of /data are replicated to the slave by DRBD
    #chown squid:squid squid
    #squid -z                  ##initialise the squid cache directories
    #iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT --to-port 3128    ##redirect client web traffic to squid
    #iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    #/etc/init.d/iptables save

    #/etc/init.d/heartbeat restart
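    To test the failover the way the original poster described, stop heartbeat on node1 (or power it off) and watch node2 take over; a rough check, using the node names and virtual IP from above:

    #/etc/init.d/heartbeat stop     ##on node1, or simply shut the machine down
    #ip addr show eth0              ##on node2: the virtual IP 192.168.2.190 should now be listed here
    #cat /proc/drbd                 ##on node2: the resource should now report itself as Primary
    #service squid status           ##on node2: squid should be running, started by heartbeat

    Clients keep using 192.168.2.190 (or the transparent redirect) and should only notice a short pause during the switch.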
