1. Introduction
  2. What is HAProxy?
  3. Health Checks for MySQL
  4. Failure Detection - how does it work?
  5. What are the benefits of using HAProxy?
  6. What happens when a database node fails?
  7. Redundancy of HAProxy
  8. Setting up HAProxy
  9. Integration with ClusterControl


1. Introduction

Applications typically connect to a database cluster by opening connections to one of the nodes in order to run transactions. If that database node fails, the client needs to reconnect to another database node before it can continue serving requests.

There are different ways to provide connectivity to one or more database servers. One way is to use a database driver that supports connection pooling, e.g.:

  • JDBC driver
  • PHP driver: http://php.net/manual/en/book.mysqlnd-ms.php
However, in Galera Cluster for MySQL, the JDBC and PHP drivers are not aware of the internal Galera state. For instance, a Galera donor node is read-only while it is helping another node resynchronize, and a driver that is unaware of this may still route queries to it.
Another solution is to use a load balancer between the clients and the database cluster.


2. What is HAProxy?

HAProxy stands for High Availability Proxy; it is a fast, reliable TCP/HTTP load balancer. It distributes a workload across a set of servers to maximize performance and optimize resource usage.

A front-end application that relies on a database backend can easily saturate it with too many concurrent connections. HAProxy provides queuing and throttling of connections towards one or more MySQL servers, and prevents a single server from becoming overloaded with too many requests.

All clients connect to the HAProxy instance, and the proxy forwards the connection to one of the available MySQL Servers based on the load-balancing scheme used.

HAProxy allows for different load-balancing schemes, e.g. roundrobin or leastconn. In the Severalnines HAProxy deploy scripts, we use the leastconn scheme so that the database server with the lowest number of connections receives the connection. If the database servers have the same number of connections, then roundrobin is performed to ensure that all servers are used.
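As an illustration, a minimal listen section in haproxy.cfg using the leastconn scheme might look like the sketch below. The server names, IP addresses and the 33306 listen port are examples only; adjust them to your environment.

```
listen mysql-cluster
    bind *:33306
    mode tcp
    balance leastconn
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check
    server db3 10.0.0.3:3306 check
```

Clients then connect to port 33306 on the HAProxy host, and each new connection is handed to the backend server with the fewest open connections.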


3. Health Checks for MySQL

HAProxy determines whether a server is available for request routing by performing so-called health checks. By default, a health check consists of trying to establish a TCP connection to the server.

This is usually not enough for a robust setup with a database backend since the database server itself could be in a non-operational state while still being able to respond to connection requests.

Severalnines provides an HTTP request based health check that determines whether a MySQL server is available by examining its internal state; the exact checks depend on the clustering solution used.
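As a sketch of how such a check is wired up, assuming the health-check service answers HTTP requests on port 9200 of each database node (the port is an illustrative assumption, not necessarily the value the Severalnines scripts use), the default TCP check can be replaced with an HTTP check like this:

```
listen mysql-cluster
    bind *:33306
    mode tcp
    option httpchk
    server db1 10.0.0.1:3306 check port 9200
    server db2 10.0.0.2:3306 check port 9200
```

With `option httpchk` and `check port`, HAProxy sends an HTTP request to the check port and treats any 2xx/3xx response as "server healthy", while the actual MySQL traffic still flows to port 3306.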


4. Failure Detection – how does it work?

There are several user-definable parameters that determine how quickly HAProxy detects that a server is not available.

By default, the interval between health checks is 2 seconds for a server that is up, transitioning between states, or not yet checked. The interval is 5 seconds when the server is completely down or unreachable.

A server will be considered down/unavailable after 3 consecutive unsuccessful health checks, and will be operational again after 2 successful health checks.
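In HAProxy configuration terms, these parameters correspond to the inter, downinter, fall and rise keywords on a server line. A sketch with the values described above (server name and address are examples):

```
# Check every 2s while up or transitioning, every 5s while down;
# mark the server down after 3 failed checks, up again after 2 successful ones.
server db1 10.0.0.1:3306 check inter 2000 downinter 5000 fall 3 rise 2
```

The intervals are given in milliseconds, so `inter 2000` means one check every 2 seconds.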


5. What are the benefits of using HAProxy?

This setup simplifies a number of things:

  • All applications access the cluster via one single IP address. The topology of the database cluster is masked behind HAProxy.
  • MySQL connections are load-balanced between the available database nodes.
  • It is possible to add or remove database nodes without any changes to the applications.
  • Once the maximum number of database connections (in MySQL) is reached, HAProxy queues additional new connections. This is a neat way of throttling database connection requests and achieves overload protection.
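The queuing behaviour in the last point can be expressed with the maxconn keyword on a server line; a sketch (the limit of 100 is an arbitrary example, and should be chosen below the MySQL server's own max_connections):

```
# Allow at most 100 concurrent connections to this backend server;
# additional connections wait in HAProxy's queue instead of failing.
server db1 10.0.0.1:3306 check maxconn 100
```

This way a burst of client connections is absorbed by HAProxy's queue rather than being rejected by MySQL.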


6. What happens when a database node fails?

When a database node fails, the database connections that have been opened on that node will also fail. It is important that HAProxy does not redirect new connection requests to the failed node.

It is possible to have HAProxy check that a server is up, but just making a connection to the MySQL port (usually 3306) is not good enough. The instance might be up, but the underlying storage engine might not be working properly. There are specific checks to be done, depending on whether the clustering type is Galera or MySQL Cluster.

Therefore, we use custom scripts, managed through the xinetd daemon, to make HAProxy check the status of a MySQL server. If a MySQL server is not available, it is automatically excluded from the list of available servers. From the HAProxy administration interface it is also possible to manually take out a server from the connection pool, e.g., for maintenance or for testing and validating new configuration parameters or optimized queries, before bringing it back into production.
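As a sketch, the xinetd service definition for such a check could look as follows. The service name, port and script path here are assumptions for illustration, not necessarily the exact values used by the Severalnines scripts.

```
# /etc/xinetd.d/mysqlchk
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/local/bin/mysqlchk
    log_on_failure  += USERID
}
```

The script invoked by xinetd inspects the MySQL server's state and returns an HTTP-style response, which HAProxy's health check then interprets. Note that xinetd typically also requires a matching entry for the service in /etc/services.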


7. Redundancy of HAProxy

Since all applications will be depending on HAProxy to connect to an available database node, one can have multiple HAProxy instances to avoid a single point of failure.

One possible setup is to install an HAProxy instance on each web server (or application server making requests on the database). This works fine if there are only a few web servers, so that the load introduced by the health checks is kept in check. The web server connects to its local HAProxy instance (e.g. by making a MySQL connection to it) and can access all the database servers through it. The web server and HAProxy together form a working unit, so the web server will not work if its HAProxy is not available.

It is also possible to set up two HAProxy instances in Active-Passive mode (using Keepalived and a Virtual IP), so that failover is automatic. This can be set up using the HAProxy deploy scripts.
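A minimal keepalived.conf sketch for such an Active-Passive pair could look like the following; the interface name, router ID, priority and virtual IP address are all examples to be adapted:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds while the HAProxy process is alive
    interval 2
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER                  # use BACKUP with a lower priority on the passive node
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_haproxy
    }
}
```

Applications connect to the virtual IP; if the active HAProxy fails its check, Keepalived moves the virtual IP to the passive node, which takes over transparently.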


8. Setting up HAProxy

Severalnines has created deployment scripts for HAProxy. These are available on GitHub.

Detailed information on how to use the scripts to install HAProxy and Keepalived with Virtual IP can be found on this blog.

The scripts work with any Galera or MySQL Cluster setup that has been deployed using the Severalnines Configurator.

The HAProxy scripts should be cloned into the install directory of the existing deployment package (generated at www.severalnines.com/config) on the ClusterControl server. The HAProxy deploy command is executed from the ClusterControl server to install HAProxy on any host.

E.g., the following installs HAProxy on a given host, for a Galera setup on a Debian platform:

./install-haproxy.sh <haproxy server> debian galera

By default, the HAProxy server will listen for connections on port 33306. In the example above, you can connect your application server(s) to the HAProxy host on port 33306, and requests will be load-balanced across the MySQL servers.
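For example, assuming HAProxy runs on a host named haproxy1 (a hypothetical hostname), a client would connect through it rather than to any database node directly:

```
mysql -h haproxy1 -P 33306 -u appuser -p mydb
```

From the client's point of view this looks like a single MySQL server, while HAProxy decides which database node actually serves the connection.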

Do not forget to GRANT access from the HAProxy server to the MySQL servers, because the MySQL servers will see HAProxy making the connections, not the application server(s). In the example above, issue the access rights you wish on the MySQL servers, e.g.:

GRANT SELECT ON mydb.* TO 'appuser'@'<haproxy server>' IDENTIFIED BY 'password';

As discussed above, the HAProxy instance will be automatically set up to load-balance across the Galera nodes and perform health checks so that traffic is directed only to fully functional Galera nodes. The install script also tunes the TCP stack of the instance where HAProxy is installed. You can install HAProxy on dedicated hosts, or co-locate an HAProxy instance with each application server.

To install HAProxy on MySQL Cluster running on Debian:

./install-haproxy.sh <haproxy server> debian mysqlcluster

9. Integration with ClusterControl  

ClusterControl is integrated with HAProxy, so you are able to deploy and manage it from the ClusterControl application.

It is also possible to navigate to the HAProxy stats page from the ClusterControl application.

The HAProxy process will be managed by ClusterControl, and is automatically restarted if it fails.