Installing A High Availability Web Server Cluster On Ubuntu 12.10 Using HAProxy, Heartbeat And LAMPP
What is the main objective of this entire topology?

What is this going to solve?
#vi /etc/hosts
192.168.0.241 haproxy
192.168.0.39 Node1
192.168.0.30 Node2
192.168.0.58 Web1
192.168.0.139 Web2
192.168.0.132 Mysql
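Before going any further, it is worth confirming that every machine can resolve the others by the names above (the host names are the ones from our /etc/hosts; adjust them if yours differ):
#ping -c 2 Node1
#ping -c 2 Node2
#ping -c 2 Web1
#ping -c 2 Web2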
That’s right. We are going to use a very handsome application named “heartbeat” as our first level of redundancy; it will be responsible for keeping our HAProxy (our “Reliable, High Performance TCP/HTTP Load Balancer”) redundant. Don’t you agree that having only ONE load balancer means there’s a single point of failure? Well, redundancy is all about overthrowing that statement: we shall never have a single point of failure, and that’s why we are not deploying a single HAProxy.
So… how does this “heartbeat” work? It is very simple, indeed. Look at our topology (first picture). We’re setting up heartbeat to monitor two servers, one of them called the “master” and the other one the “slave”, and they constantly exchange their status information through eth1 (10.100.100.0/32), a dedicated point-to-point network. Basically, they tell each other: “I’m up! I’m up! I’m up!…”, and whenever a node stops hearing this, it will take over as the new master by adopting what we call the “virtual IP address” (or VIP). By the way, the VIP is the address that your users will use whenever they want to reach the web server. Check out the illustration on the right.
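Heartbeat itself is available straight from the Ubuntu repositories, so installing it on both Node1 and Node2 should be as simple as:
#apt-get install heartbeat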
Setting up heartbeat is easy. After the installation you will need to create the following files on both servers (Node1 and Node2):
# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 5
warntime 10
initdead 15
udpport 694
bcast eth1
auto_failback on
node Node1
node Node2
# vi /etc/ha.d/haresources
Node1 IPaddr::192.168.0.241/24/eth0 lampp

# vi /etc/ha.d/authkeys
auth 3
3 md5 polaris
#chmod 600 /etc/ha.d/authkeys
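All three files must be identical on Node1 and Node2 (and authkeys must stay readable by root only), so instead of retyping them you can simply copy them over; HAProxy itself also comes from the repositories and has to be installed on both nodes. For example:
#scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys root@Node2:/etc/ha.d/
#ssh root@Node2 chmod 600 /etc/ha.d/authkeys
#apt-get install haproxy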
#vi /etc/haproxy/haproxy.cfg
listen web-cluster 192.168.0.241:80
    mode http
    stats enable
    stats auth admin:polaris # Change this to your own username and password!
    balance roundrobin
    option httpclose
    option forwardfor
    cookie JSESSIONID prefix
    server web1 192.168.0.58:80 cookie A check
    server web2 192.168.0.139:80 cookie B check
#vi /etc/default/haproxy
Change the line ENABLED=0 to:
ENABLED=1
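The listen block above only defines our web cluster; a stock haproxy.cfg normally also contains global and defaults sections, which we leave untouched. Before starting anything, it does not hurt to let HAProxy validate the configuration on both nodes:
#haproxy -c -f /etc/haproxy/haproxy.cfg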
#/etc/init.d/heartbeat start
#/etc/init.d/haproxy start
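Once both daemons are up, you can check which node currently holds the VIP and watch the round robin doing its job by hitting the virtual address a couple of times (by default the statistics page should also be reachable at http://192.168.0.241/haproxy?stats with the admin:polaris credentials we set above):
#ip addr show eth0 | grep 192.168.0.241
#curl -I http://192.168.0.241/
#curl -I http://192.168.0.241/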
If HAProxy refuses to start on haproxy-02 (our slave, Node2) with an error like "cannot bind socket", don't worry: by default the kernel will not bind a socket to an address that does not belong to the local machine. At this moment the virtual IP address belongs to the master node (haproxy-01), which is why haproxy-01 started HAProxy without any problems while the slave complained. The solution lies here:
#echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf
#sysctl -p
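You can then confirm that the kernel accepted the new setting and start HAProxy again on the slave:
#sysctl net.ipv4.ip_nonlocal_bind
#/etc/init.d/haproxy start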