How to: highly available load balancer - HAProxy with Keepalived

Mirek @ webmastah.net

Posted on December 13, 2020

In this article I will guide you through the process of installing and configuring a highly available load balancer. But before we start, let's decode a few names and concepts that may be new to a beginner.

Let's start with what we want to achieve. We don't need to let our imagination run wild: with modern web applications it is quite natural to spread traffic across more than one machine.

Let's assume that we have an application that can no longer handle its traffic on a single server (say, it already runs on a large and expensive VPS instance) and it's time to split some of its services across several servers. Let it be a web server (nginx, Apache, whatever). And here we meet the first problem: how do we split this traffic flow across more servers?

The answer is a load balancer: a service that can juggle traffic between machines according to the criteria you set. There are many load balancers to choose from (hardware appliances, or a managed service offered by many cloud providers); even the popular Nginx web server can play this role. Here, though, we'll take care of one of the most popular pieces of software in this area: HAProxy.

OK, we have our servers and a load balancer in front of them, but wait a minute: if this load balancer crashes, all those web servers won't do us any good. This is a classic SPoF ("single point of failure"). So where is the high availability?

And here we come to the second piece of our puzzle, Keepalived. How will it help us? We will duplicate the load balancer instance, i.e., we will have two load balancers, and thanks to Keepalived we will be able to switch between them if one of them fails. Cool? Yes 😉

What do we need?

To build our puzzle, we will need:

  • two load balancer instances (HAProxy)

    • lb1.webmastah.dev - 192.168.0.100
    • lb2.webmastah.dev - 192.168.0.101
  • two web server instances (nginx)

    • web1.webmastah.dev - 192.168.0.102
    • web2.webmastah.dev - 192.168.0.103
  • a virtual IP (also known as a floating IP; ask your VPS provider about it), which will be attached to one of the load balancer instances - 192.168.0.99
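
Optionally, to make these hostnames resolve during testing, you can add them to /etc/hosts on each machine. A minimal sketch (the names and addresses are the ones assumed above; adjust to your setup):

cat >> /etc/hosts <<'EOF'
192.168.0.100 lb1.webmastah.dev
192.168.0.101 lb2.webmastah.dev
192.168.0.102 web1.webmastah.dev
192.168.0.103 web2.webmastah.dev
EOF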

The whole thing will eventually look like this:

[Diagram: HAProxy load balancers with Keepalived in front of the web servers]

WEB servers

Let's start with the simplest part: preparing the web servers. There's nothing complicated here, we just install nginx on both servers. You can add "WEB 1" / "WEB 2" to the default page served by nginx to make testing easier: you will then see which server is responding to your request.
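
A quick sketch for web1 on Debian/Ubuntu (the package name and web root are the Debian defaults; repeat on web2 with "WEB 2"):

# install nginx and mark the default page so we can tell the servers apart
apt-get install -y nginx
echo '<h1>WEB 1</h1>' > /var/www/html/index.html
systemctl enable --now nginx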

LB servers

Install HAProxy and open /etc/haproxy/haproxy.cfg for editing.
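
For example, on Debian/Ubuntu (use yum or dnf on CentOS):

apt-get install -y haproxy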

Two sections of this configuration are crucial: the frontend and backend settings. The frontend describes the traffic that enters HAProxy; the backend describes where that traffic is pushed on to, i.e., the given web servers.

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout connect 5s     # HAProxy warns at startup if no timeouts are set
    timeout client  30s
    timeout server  30s

frontend www
    bind 192.168.0.99:80
    default_backend nginx

backend nginx
    balance roundrobin
    mode tcp
    server web1 192.168.0.102:80 check
    server web2 192.168.0.103:80 check
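Before going further, you can ask HAProxy to validate the file we just edited:

haproxy -c -f /etc/haproxy/haproxy.cfg   # exits 0 if the configuration parses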

In the frontend section we set our main IP: this is where traffic will come in, and it is the address that will move between the main and the backup load balancer when necessary. In the backend section we list our web servers.

HAProxy's configuration is a topic for a completely different article; above I've focused only on the part necessary for this tutorial. It is worth adding, though, that in the defaults section we can set option dontlog-normal, which logs only errors, and option log-health-checks, which helps catch (log) stability problems (more about this topic can be found here: Performing Health Checks).
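
For example, extending the defaults section from above:

defaults
    log     global
    mode    tcp
    option  tcplog
    option  dontlog-normal      # log errors only
    option  log-health-checks   # log health-check state changes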

We restart HAProxy and check that it works and that the logs look OK. From now on, when we hit our main IP we should see the pages served by the web servers. If we previously marked each nginx with its server number, we can now watch the responses alternate between them (with roundrobin they simply take turns; there are other balancing algorithms, but that is a separate topic).
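
A quick test from any machine that can reach the floating IP (assuming 192.168.0.99 is already assigned to the active load balancer and the pages were marked as above):

systemctl restart haproxy
for i in 1 2 3 4; do curl -s http://192.168.0.99/; done
# expected: the WEB 1 / WEB 2 markers alternating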

For HAProxy on the second machine (the one that is currently the backup) to be able to bind to our main IP, we need to allow this by adding net.ipv4.ip_nonlocal_bind=1 to /etc/sysctl.conf (after adding the entry, reload with sysctl -p).
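
On both LB machines:

# allow binding to an IP address that is not (yet) assigned to this machine
echo 'net.ipv4.ip_nonlocal_bind=1' >> /etc/sysctl.conf
sysctl -p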

At this stage it is also worth making sure that HAProxy starts by itself after a reboot. This depends on your system: for Debian or Ubuntu, edit /etc/default/haproxy and add ENABLED=1; for CentOS 6, run chkconfig haproxy on; for CentOS 7, systemctl enable haproxy.
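
In one place (pick the line matching your distribution):

echo 'ENABLED=1' >> /etc/default/haproxy   # Debian/Ubuntu (sysvinit scripts)
chkconfig haproxy on                       # CentOS 6
systemctl enable haproxy                   # CentOS 7 and other systemd distros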

Keepalived

Okay, we now have one load balancer, so we repeat the above steps to configure LB2 in the same way. Then we can move on to the mechanism that will switch traffic over to LB2 in case of problems with LB1.

On both servers we need to configure Keepalived. Edit /etc/keepalived/keepalived.conf. On LB1 (MASTER) it looks like this:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the pid existance
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0                # interface to monitor
    state MASTER
    virtual_router_id 51          # VRRP router ID; must match on both nodes
    priority 101                  # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.0.99              # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}

On LB2 (BACKUP) we modify two variables, state and priority:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the pid existance
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0                # interface to monitor
    state BACKUP
    virtual_router_id 51          # VRRP router ID; must match on both nodes
    priority 100                  # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.0.99              # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}

How does it work? It's simple: the Keepalived daemons check each other to see whether the other side is still "alive". If the BACKUP server finds that the MASTER is not responding, it assigns our main IP to itself, taking over all the traffic. Of course, LB1 and LB2 must be on a network segment where multicast (VRRP) traffic is allowed.
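
After starting Keepalived on both nodes, you can check which one currently holds the virtual IP; with the priorities above it should be LB1:

systemctl restart keepalived               # on both LB nodes
ip -4 addr show eth0 | grep 192.168.0.99   # should print a match on LB1 only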

Let's check it: stop HAProxy on LB1, and after a few seconds our traffic should already be served from LB2. Magic!
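
A minimal failover test, assuming the setup above:

# on LB1: stop HAProxy; chk_haproxy fails and LB1 loses its +2 priority bonus
systemctl stop haproxy
# on LB2, a few seconds later: the virtual IP should now live here
ip -4 addr show eth0 | grep 192.168.0.99
# and the floating IP keeps serving traffic
curl -s http://192.168.0.99/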

In this situation our original diagram changes to:

[Diagram: final architecture, with traffic failing over from LB1 to LB2]

You can watch the Keepalived daemons' communication with: tcpdump -i <interface Keepalived runs on> vrrp -c 10 -v.

Summary

As you can see, in an hour or less we can set up and configure a multi-server, highly available environment. Nowadays, with plenty of "VPS in the cloud" services offering a basic package for $5, we can run a decently scaled production environment at very low cost. By adding some automation to the configuration and to provisioning new VPS instances, we can maintain a very flexible and efficient environment for really little money, which, contrary to appearances, may be enough for a very, very long stretch of a project's growth. It has never been so easy and so cheap!
