Applications typically connect to a cluster by opening connections to one of its nodes in order to run transactions. If that database node fails, the client needs to reconnect to another database node before it can continue serving requests.
There are different ways to provide connectivity to one or more database servers. One way is to use a database driver that supports connection pooling, e.g.
PHP driver: http://php.net/manual/en/book.mysqlnd-ms.php
However, for Galera, the JDBC and PHP drivers are not aware of internal Galera state information. For instance, a Galera donor node is read-only while it is helping another node resynchronize.
Another solution is to use a load balancer between the clients and the database cluster.
HAProxy stands for High Availability Proxy, and is a great TCP/HTTP load balancer. It distributes a workload across a set of servers to maximize performance and optimize resource usage.
A front-end application that relies on a database backend can easily saturate it with too many concurrent connections. HAProxy provides queuing and throttling of connections towards one or more MySQL Servers, and prevents a single server from becoming overloaded with too many requests.
All clients connect to the HAProxy instance, and the proxy forwards the connection to one of the available MySQL Servers based on the load-balancing scheme used.
HAProxy allows for different load-balancing schemes, e.g. roundrobin or leastconn. In the Severalnines HAProxy deploy scripts, we use the leastconn scheme so that the database server with the lowest number of connections receives the connection. If the database servers have the same number of connections, then roundrobin is performed to ensure that all servers are used.
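As an illustration, a minimal load-balancing section in this spirit could look as follows. This is a hedged sketch, not the configuration generated by the deploy scripts; the listen port, backend names and addresses are placeholders.

```
# Hypothetical HAProxy section for load balancing MySQL connections;
# names and addresses are illustrative.
listen mysql-cluster
    bind *:33306
    mode tcp
    balance leastconn           # route to the server with the fewest connections
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check
    server db3 10.0.0.3:3306 check
```

With balance leastconn, servers that have the same number of connections are rotated in round-robin order, matching the behavior described above.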
HAProxy determines if a server is available for request routing by performing so-called health checks. By default, a health check consists of trying to establish a TCP connection to the server.
This is usually not enough for a robust setup with a database backend since the database server itself could be in a non-operational state while still being able to respond to connection requests.
Severalnines provides an HTTP-based health check which determines whether a MySQL Server is available by carefully examining its internal state, which depends on the clustering solution used.
There are several user-definable parameters that determine how quickly HAProxy detects that a server is unavailable.
By default, the interval between health checks is 2 seconds for a server that is up, transitioning between up and down, or not yet checked. The interval is 5 seconds when the server is down or unreachable.
A server will be considered down/unavailable after 3 consecutive unsuccessful health checks, and will be operational again after 2 successful health checks.
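Expressed in HAProxy configuration terms, these defaults correspond to something like the following server line (a sketch; the server name and address are placeholders):

```
# inter: check interval while up (ms); downinter: interval while down (ms)
# fall: failed checks before marking a server down; rise: successful checks
# before marking it up again
server db1 10.0.0.1:3306 check inter 2000 downinter 5000 fall 3 rise 2
```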
This setup simplifies a number of things:
When a database node fails, the database connections that have been opened on that node will also fail. It is important that HAProxy does not redirect new connection requests to the failed node.
It is possible to have HAProxy check that a server is up, but just making a connection to the MySQL port (usually 3306) is not good enough. The instance might be up, but the underlying storage engine might not be working properly. There are specific checks to be done, depending on whether the clustering type is Galera or MySQL Cluster.
Therefore, we use custom scripts, managed through the xinetd daemon, to make HAProxy check the status of a MySQL server. If a MySQL server is not available, it is automatically excluded from the list of available servers. From the HAProxy administration interface it is also possible to manually take out a server from the connection pool, e.g., for maintenance or for testing and validating new configuration parameters or optimized queries, before bringing it back into production.
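To sketch the idea behind such a check for Galera: the script essentially maps the node's wsrep_local_state onto an HTTP status that HAProxy's HTTP check can interpret, and xinetd exposes it on a dedicated port. The snippet below is a simplified, hypothetical illustration, not the actual Severalnines script.

```shell
#!/bin/sh
# Hypothetical sketch: map a Galera wsrep_local_state value to an HTTP
# response. State 4 means "Synced"; anything else (e.g. 2, "Donor") is
# reported as unavailable so HAProxy stops routing connections to the node.
wsrep_state_to_http() {
    if [ "$1" = "4" ]; then
        printf 'HTTP/1.1 200 OK\r\n\r\nGalera node is synced.\r\n'
    else
        printf 'HTTP/1.1 503 Service Unavailable\r\n\r\nGalera node is not synced.\r\n'
    fi
}

# In a real check the state would be read from the server itself, e.g.:
#   STATE=$(mysql -N -B -e "SHOW STATUS LIKE 'wsrep_local_state'" | awk '{print $2}')
wsrep_state_to_http 4
```

A MySQL Cluster check follows the same pattern but inspects the NDB data nodes' status instead of Galera's replication state.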
Since all applications depend on HAProxy to connect to an available database node, you can run multiple HAProxy instances to avoid a single point of failure.
One possible setup is to install an HAProxy instance on each web server (or application server making requests to the database). This works fine if there are only a few web servers, so that the load introduced by the health checks is kept in check. The web server connects to the local HAProxy (e.g. by making a MySQL connection to 127.0.0.1:3306) and can access all of the database servers. The web server and its HAProxy together form a working unit, so the web server will not work if its HAProxy is not available.
It is also possible to set up two HAProxy instances in Active-Passive mode (using Keepalived), so that fail-over is automatic. At the time of writing, the HAProxy deploy script does not set this up automatically.
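A typical Keepalived setup of this kind puts a virtual IP in front of the two HAProxy instances, and clients connect to that IP. A hedged sketch of the active node's configuration might look like the following; the interface name, priority and addresses are placeholders.

```
# Hypothetical /etc/keepalived/keepalived.conf on the active HAProxy node.
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101                  # the passive node uses a lower priority
    virtual_ipaddress {
        10.0.0.100                # clients connect to this virtual IP
    }
    track_script {
        chk_haproxy
    }
}
```

If HAProxy dies on the active node, the tracked script fails and the virtual IP moves to the passive node.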
Severalnines has created deployment scripts for HAProxy. These are available on GitHub.
The scripts work with any Galera or MySQL Cluster setup that has been deployed using the Severalnines Configurator.
The HAProxy scripts should be cloned into the install directory of the existing deployment package on the ClusterControl server (the deployment package generated at www.severalnines.com/config). The HAProxy deploy command is then executed from the ClusterControl server to install HAProxy on any host.
E.g., the following installs HAProxy on host 10.176.131.76, using Galera on a Debian platform:
./install-haproxy.sh 10.176.131.76 debian galera
By default, the HAProxy server will listen for connections on port 33306. In the example above, you can connect your application server(s) to 10.176.131.76:33306, and requests will be load balanced on the MySQL servers.
Do not forget to GRANT access from the HAProxy server to the MySQL Servers, because the MySQL Servers will see connections coming from the HAProxy host, not from the application server(s) themselves. For the example above, issue the appropriate grants on the MySQL Servers:
GRANT select ON mydb.* TO 'appuser'@'10.176.131.76' IDENTIFIED BY 'password';
As discussed above, the HAProxy instance will be automatically set up to load-balance across the Galera nodes and perform health checks so that traffic is only directed to fully functional Galera nodes. The install script also tunes the TCP stack of the host where HAProxy is installed. You can install HAProxy on dedicated hosts, or co-locate an HAProxy instance with each application server.
To install HAProxy on MySQL Cluster running on Debian:
./install-haproxy.sh <haproxy server> debian mysqlcluster
Once installed, the HAProxy instance is integrated with ClusterControl, so it is possible to view its status from the ClusterControl GUI, as seen in the diagram below.
Clicking on the link brings up the HAProxy Stats page.
The HAProxy process will be managed by ClusterControl, and is automatically restarted if it fails.