Orchestrating Highly Available Load Balancers with OpenStack Heat

Want to learn more about OpenStack Orchestration? Check out our upcoming class in Pune, India, and sign up today!

Currently, OpenStack does not give you the ability to create highly available load balancers with LBaaS. Spawning a pair of highly available virtual machines is not possible without using Neutron extensions, and it certainly cannot be done from the OpenStack Dashboard.

How can we create a pair of highly available load balancers, such that if one of them dies, the surviving one keeps serving traffic?

In this example, we will see how to orchestrate a pair of highly available load balancers with OpenStack Heat.

In the Heat template, apart from providing all the properties required by the OS::Nova::Server resource to create a server, we also have to specify a Virtual IP address (VIP) that can float between the two servers, Server1 and Server2. The user_data section then handles the software installation; in this example we will install haproxy and keepalived on both servers.

                10.0.0.2
           (shared IP address)
  -------+----------|----------+------
         |                     |
    +----+----+           +----+----+
    | Server1 |           | Server2 |
    +---------+           +---------+
     10.0.0.5              10.0.0.6
      haproxy               haproxy
     keepalived            keepalived

The problem of having a shared IP address that can float between the two servers can be solved with the “allowed-address-pairs” extension provided by Neutron. allowed-address-pairs lets you add extra IP/MAC address pairs to a port, so that traffic matching those values is allowed. (For more information, please have a look at this commit: https://review.openstack.org/#/c/38230/.)
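For an existing port, the same thing can be done outside of Heat with the neutron CLI of that era. The sketch below only prints the command rather than executing it (it would need a running Neutron), and the port UUID is a placeholder:

```shell
PORT_ID="PORT_UUID"   # placeholder for the real port UUID
# Build the port-update call that adds 10.0.0.2 as an allowed address pair.
CMD="neutron port-update ${PORT_ID} --allowed-address-pairs type=dict list=true ip_address=10.0.0.2"
echo "${CMD}"
```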

In the parameters section of our template, we will define a parameter called “shared_ip”. This shared_ip will act as the VIP (once again, Virtual IP address) for this pair of highly available load-balancing servers.

parameters:
  shared_ip:
    type: string
    description: Fixed IP of the extra port which will be used by keepalived.
    default: 10.0.0.2

In the user_data section for Server1, we will download and configure keepalived and haproxy. Server1 will be configured as MASTER with priority 100, and $shared_ip will be the VIP managed by keepalived; it will appear as the second IP address on eth0 of Server1. If Server1 goes down, $shared_ip will fail over to Server2, which is configured with state BACKUP and priority 50.
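Whether a node currently holds the VIP can be checked by looking for the shared address on eth0. The logic is sketched below against sample `ip addr` output (the addresses match the diagram above); on a real node you would pipe `ip addr show eth0` into the same grep:

```shell
# Sample 'ip addr show eth0' output as it would appear on the MASTER:
# 10.0.0.5 is Server1's fixed IP, 10.0.0.2 the keepalived-managed VIP.
ADDRS='inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
inet 10.0.0.2/24 brd 10.0.0.255 scope global secondary eth0'
SHARED_IP="10.0.0.2"
if echo "${ADDRS}" | grep -q "inet ${SHARED_IP}/"; then
  STATE="MASTER: holding ${SHARED_IP}"
else
  STATE="BACKUP: VIP not present"
fi
echo "${STATE}"
```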

For Server1, the user_data section should look like this:


user_data:
  str_replace:
    template: |
      #!/bin/bash
      # This script runs as root via cloud-init, so no sudo is needed.

      # Install and configure keepalived: Server1 is the VRRP MASTER
      # (priority 100) and $shared_ip is the VIP it manages.
      yum install -y keepalived
      echo '
      vrrp_instance vrrp_group_1 {
        state MASTER
        interface eth0
        virtual_router_id 1
        priority 100
        authentication {
          auth_type PASS
          auth_pass password
        }
        virtual_ipaddress {
          $shared_ip/24 brd 10.0.0.255 dev eth0
        }
      }
      ' > /etc/keepalived/keepalived.conf
      service keepalived restart

      # Install and configure haproxy
      yum install -y haproxy
      echo '
      global
          log 127.0.0.1 local0 debug
          maxconn 45000   # total max connections; depends on ulimit
          user haproxy
          group haproxy
          daemon

      defaults
          log global
          mode http
          option dontlognull
          retries 3
          option redispatch
          timeout server 86400000
          timeout connect 86400000
          timeout client 86400000
          timeout queue 1000s

      listen http_web 0.0.0.0:80
          mode http
          stats enable
          stats uri /haproxy?stats
          option httpclose
          option forwardfor
          balance roundrobin   # load-balancing algorithm
          reqadd X-Forwarded-Proto:\ http
          server Alice 192.168.122.111 check
          server Bob 192.168.122.112 check

      listen stats $shared_ip:1936
          mode http
          log global
          maxconn 10
          clitimeout 100s
          srvtimeout 100s
          contimeout 100s
          timeout queue 100s
          stats enable
          stats hide-version
          stats refresh 30s
          stats show-node
          stats auth admin:password
          stats uri /haproxy?stats
      ' > /etc/haproxy/haproxy.cfg
      service haproxy restart

      echo 'http://$host/' > /root/host_url.txt
    params:
      $shared_ip: { get_param: shared_ip }
      $host: { get_attr: [ extra_floating_ip, floating_ip_address ] }
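Heat's str_replace performs a literal token substitution on the template text before the script ever reaches the VM, which is why $shared_ip and $host expand even inside single quotes. The effect can be mimicked with sed (the line is taken from the keepalived config above):

```shell
# One line of the template, exactly as Heat sees it before substitution.
TEMPLATE='$shared_ip/24 brd 10.0.0.255 dev eth0'
SHARED_IP="10.0.0.2"
# Replace the literal token $shared_ip, as str_replace would.
RENDERED="$(echo "${TEMPLATE}" | sed "s|\$shared_ip|${SHARED_IP}|")"
echo "${RENDERED}"
```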


The user_data section for Server2 is almost identical; only the VRRP state (BACKUP) and priority (50) change:


user_data:
  str_replace:
    template: |
      #!/bin/bash
      # This script runs as root via cloud-init, so no sudo is needed.

      # Install and configure keepalived: Server2 is the VRRP BACKUP
      # (priority 50) and takes over $shared_ip only if Server1 fails.
      yum install -y keepalived
      echo '
      vrrp_instance vrrp_group_1 {
        state BACKUP
        interface eth0
        virtual_router_id 1
        priority 50
        authentication {
          auth_type PASS
          auth_pass password
        }
        virtual_ipaddress {
          $shared_ip/24 brd 10.0.0.255 dev eth0
        }
      }
      ' > /etc/keepalived/keepalived.conf
      service keepalived restart

      # Install and configure haproxy
      yum install -y haproxy
      echo '
      global
          log 127.0.0.1 local0 debug
          maxconn 45000   # total max connections; depends on ulimit
          user haproxy
          group haproxy
          daemon

      defaults
          log global
          mode http
          option dontlognull
          retries 3
          option redispatch
          timeout server 86400000
          timeout connect 86400000
          timeout client 86400000
          timeout queue 1000s

      listen http_web 0.0.0.0:80
          mode http
          stats enable
          stats uri /haproxy?stats
          option httpclose
          option forwardfor
          balance roundrobin   # load-balancing algorithm
          reqadd X-Forwarded-Proto:\ http
          server Alice 192.168.122.111 check
          server Bob 192.168.122.112 check

      listen stats $shared_ip:1936
          mode http
          log global
          maxconn 10
          clitimeout 100s
          srvtimeout 100s
          contimeout 100s
          timeout queue 100s
          stats enable
          stats hide-version
          stats refresh 30s
          stats show-node
          stats auth admin:password
          stats uri /haproxy?stats
      ' > /etc/haproxy/haproxy.cfg
      service haproxy restart

      echo 'http://$host/' > /root/host_url.txt
    params:
      $shared_ip: { get_param: shared_ip }
      $host: { get_attr: [ extra_floating_ip, floating_ip_address ] }

One caveat: the stats listener binds to $shared_ip, which is only present on the current MASTER. On the BACKUP node, haproxy may refuse to start until it holds the VIP, unless net.ipv4.ip_nonlocal_bind is set to 1.

 

At this point, we need an additional port with a corresponding floating IP. The reason we need this additional port is to be able to include the shared_ip address as an allowed-address-pair on Server1's and Server2's ports, so that they are allowed to send traffic from this address. We will use the “ip_address” property of the OS::Neutron::Port resource to assign a “fixed” IP address to this “extra_port”. The value for this property is passed in via the “shared_ip” parameter, which means you can define the fixed IP directly from the OpenStack Dashboard. This fixed IP will float between the NICs of the two servers, showing up as the second IP address on the MASTER, which is Server1. If we stop keepalived on Server1, the “shared_ip” will float to Server2 and appear as the second IP on its eth0.

However, if we restart keepalived on Server1, the “fixed IP” will again appear on Server1 because of its higher priority (100 for Server1 versus 50 for Server2).
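This failback behaviour follows VRRP's election rule: among the reachable routers, the one with the highest priority owns the VIP. A minimal sketch of that rule, using the node names and priorities from this example:

```shell
# "name:priority" for every node currently reachable; highest priority wins.
ALIVE="Server1:100 Server2:50"
# Emit "priority name" pairs, sort numerically descending, keep the top name.
MASTER="$(for n in ${ALIVE}; do echo "${n#*:} ${n%%:*}"; done | sort -rn | head -n1 | cut -d' ' -f2)"
echo "MASTER is ${MASTER}"
```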

With that in mind, let's create an additional port in the template:

extra_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_param: private_net_id }
    fixed_ips:
      - ip_address: { get_param: shared_ip }


This floating IP will map to our so-called “fixed IP”, which acts as the VIP for Server1 and Server2.

extra_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: extra_port }

With the above, we have created an extra_port with a fixed IP address. How do we map it to the port already present on Server1? By using the “allowed_address_pairs” property of the OS::Neutron::Port resource, as mentioned above. As shown in the example below, “shared_ip” will appear as the second IP address on server1_port:

server1_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_param: private_net_id }
    allowed_address_pairs:
      - ip_address: { get_param: shared_ip }
    fixed_ips:
      - subnet_id: { get_param: private_subnet_id }


server1_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: server1_port }

The same goes for Server2: “shared_ip” will appear as the extra IP address on Server2's port.

server2_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_param: private_net_id }
    allowed_address_pairs:
      - ip_address: { get_param: shared_ip }
    fixed_ips:
      - subnet_id: { get_param: private_subnet_id }


server2_floating_ip:
  type: OS::Neutron::FloatingIP
  properties:
    floating_network_id: { get_param: public_net_id }
    port_id: { get_resource: server2_port }

The actual IP address of the floating IP associated with “shared_ip” will be written to a file named "host_url.txt" in the /root directory of both load balancers. Note that the single quotes around 'http://$host/' do not block the substitution: str_replace fills in $host before the script ever runs on the VM.

Creating a pair of highly available load balancers inside an OpenStack cloud is tricky, but entirely doable. Neutron's allowed-address-pairs extension lets us create a pair of highly available virtual machines, and installing haproxy on them handles the actual load balancing. OpenStack Heat automates this rather complex process, and the good news is that you can drive it from the OpenStack Dashboard itself: pass in the Heat template and the necessary parameters, and a pair of highly available load balancers is spawned in a matter of minutes.
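From the command line, launching the stack would look roughly like this. The sketch only prints the invocation (running it needs the heat CLI and a live cloud); the template file name and the network IDs are placeholders:

```shell
# Hypothetical stack-create call; parameter names match the template above.
CMD='heat stack-create ha-lb -f ha_loadbalancers.yaml -P "shared_ip=10.0.0.2;public_net_id=PUBLIC_NET;private_net_id=PRIVATE_NET;private_subnet_id=PRIVATE_SUBNET"'
echo "${CMD}"
```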

Want to learn more about this? There are two talks up for the upcoming OpenStack Summit that you can vote for!

 
