Thanks to everyone who attended yesterday’s webinar; if you missed the sessions or would like to watch the webinar again & browse through the slides, they are now available online.
Thanks again to our speaker, Seppo Jaakola from Codership, the creators of Galera Cluster, for this in-depth talk on Galera Cluster Best Practices - Zero Downtime Schema Changes.
As OpenStack deployments mature from evaluation/development to production environments supporting apps and services, high availability becomes a key requirement. In a previous post, we showed you how to cluster the database backend, which is central to the operation of OpenStack. In that setup, you would have two controllers, while placing a 3-node Galera cluster on separate hosts. Now, it can be quite a leap to go from one VM with all services running on it to a fully distributed setup with 5 VMs. The good news is that you can have a highly available setup starting with just 3 VMs.
In this post, we are going to show you how to cluster OpenStack Havana in a minimal node setup with 2 controllers and one compute node. Our controllers will be running all OpenStack services, as well as clustered RabbitMQ and MySQL. A third node will have a mandatory Galera Arbitrator (garbd) colocated with a ClusterControl server. The third node can also serve as an OpenStack compute node.
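On the third node, garbd joins the cluster as a voting member without storing any data, which is what allows a two-data-node cluster to keep quorum. A minimal invocation might look like the following; the IP addresses and group name are placeholders for illustration:

```shell
# Join the Galera group as an arbitrator (no data is replicated to this node).
# The addresses and cluster name below are examples -- substitute your own.
garbd --address gcomm://192.168.100.11,192.168.100.12 \
      --group my_openstack_cluster \
      --daemon
```

With two data nodes plus garbd, the cluster can survive the failure of any single node without losing quorum.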
We will be using Ubuntu 12.04 LTS 64bit (codename Precise) on all of our hosts. The installation is performed as the root user, so run “sudo -i” at the start of each SSH session. Make sure the NTP service is installed on all hosts, and use ntpdate to sync the other hosts against the controller node’s NTP daemon. iptables is turned off by default. All hosts should have two network interfaces: one for the external network, the other for OpenStack’s internal traffic. Only the controller node has a static IP assigned on eth1; the others will remain unconfigured.
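The preparation steps above boil down to a few commands per host. A sketch, assuming the controller’s internal address is 192.168.100.11 (an example value):

```shell
# Run as root on every host (package names assume Ubuntu 12.04)
sudo -i
apt-get update && apt-get install -y ntp

# On the non-controller hosts, sync time once against the controller's NTP daemon
# (192.168.100.11 is an example address for the controller)
ntpdate 192.168.100.11
```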
Our setup can be illustrated as follows:
The Severalnines team is pleased to announce the release of ClusterControl 1.2.4. This release contains key new features along with performance improvements and bug fixes.
We have outlined some of the key features below. For additional details about the release:
Highlights of ClusterControl 1.2.4 include:
MongoDB-specific features:
Database schema changes are usually not popular among DBAs or sysadmins, not when you are operating a cluster and cannot afford to switch off the service during a maintenance window. There are different ways to perform schema changes, some procedures being more complicated than others. We invited Seppo from the Codership team to tell us about the options. If you’d like to learn more, please register for our new webinar.
Tuesday, December 3rd 2013
We’re particularly excited about this year’s Percona Live London MySQL Conference. The line-up of speakers & topics looks excellent and it’s good to see speakers from Oracle, Percona, the MariaDB Foundation (amongst others) scheduled at the same event. It demonstrates not just the diversity of the ever broadening MySQL ecosystem, but also the fact that there really is room for everyone to contribute to, participate in, and advance MySQL in manifold directions while still retaining a certain amount of uniformity.
And this is how we will be contributing to the event ...
Correct tuning of MySQL NDB Cluster can have a dramatic impact on performance. As a distributed, shared-nothing system, it is quite sensitive to the tuning of communication buffers and to correct partitioning of the schema.
In this session, we will look at different tuning aspects of MySQL Cluster.
We will look closely at the new parameters and status variables of MySQL Cluster 7.3 to help diagnose issues.
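To give a flavor of the knobs involved, a config.ini for the data nodes might tune memory and communication buffers along these lines. The values below are purely illustrative placeholders, not recommendations:

```ini
; Example config.ini fragment -- all values are placeholders for illustration
[ndbd default]
NoOfReplicas=2
DataMemory=2G
IndexMemory=256M
TotalSendBufferMemory=16M

[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M
```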
Data protection is vital for DB admins, especially when it involves data that is accessed and updated 24 hours a day. Clustering and replication are techniques that provide protection against failures, but what if a user or DBA issues a detrimental command against one of the databases? A user might erroneously delete or update the contents of one or more tables, drop database objects that are still needed during an update to an application, or run a large batch update that fails midway. How do we recover lost data?
In a previous post, we showed you how to do a full restore from backup. Great, now you’ve restored up to the last incremental backup that was done at 6am this morning. But how about the rest of the data?
This is where you’d do a point-in-time recovery, to recover your data prior to the transaction that caused the problem. The procedure involves restoring the database from a backup taken before the target recovery time, then using the binary log to roll the database forward to that target time. This is a procedure that every DBA or sysadmin should be familiar with.
In this blog, we will show you how to do a point-in-time recovery of your Galera Cluster. An important component here is the MySQL binary log, which contains events that describe all database changes. After the latest backup has been restored, the events in the binary log that were recorded after the backup was taken will be re-executed. Thus, it is possible to replay transactions up to the last consistent state of the database, right before the erroneous command was issued.
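The replay step itself is typically done with mysqlbinlog, piping the relevant events into the server and stopping just before the erroneous statement. A sketch, assuming the backup has already been restored and you know roughly when the bad transaction ran (the timestamps and binlog file names below are examples):

```shell
# Replay binlog events recorded after the 6am backup, stopping just before
# the erroneous statement (times and file names are illustrative)
mysqlbinlog --start-datetime="2013-11-20 06:00:00" \
            --stop-datetime="2013-11-20 09:59:59" \
            mysql-bin.000012 mysql-bin.000013 | mysql -u root -p
```

If you know the exact binlog position of the bad statement, --stop-position gives finer control than a timestamp.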
Last week, the Severalnines & Codership teams came together to co-host two webinar sessions on Galera 3.0, MySQL 5.6, Global Transaction IDs and WAN. The sessions were held during EMEA/APAC as well as NA/LATAM timezones, which worked out quite nicely. Our speakers were Seppo Jaakola from Codership & Vinay Joosery from Severalnines.
We had a lot of questions during and also after the webinar sessions, so we thought we’d post all the answers here, as well as the video replay and slides.
Galera uses a preallocated file with a specific size, called the gcache, to store writesets in a circular buffer. By default, its size is 128MB. In this post, we are going to explore how to leverage the gcache to improve the operation of a Galera cluster.
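The gcache size is set through the wsrep provider options in my.cnf on each node; for example (256M is an example value, not a recommendation):

```ini
# my.cnf on each Galera node -- size shown here is illustrative
wsrep_provider_options="gcache.size=256M"
```

A larger gcache lets a rejoining node catch up via incremental state transfer (IST) over a longer outage window, at the cost of disk space.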
We have a four-node Galera cluster running the latest release, 23.2.7 (r157). We have a table called t1 that is replicated by Galera to all nodes. The cluster nodes use the default 128MB gcache.size, and we’ll execute a large writeset to see how the gcache responds.
Let’s create a big writeset using LOAD DATA; it is about 200 MB in size:
mysql> LOAD DATA LOCAL INFILE '/tmp/mysql_statistics.sql' INTO TABLE t1
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';
Query OK, 3784725 rows affected (4 min 2.94 sec)
Records: 3784725  Deleted: 0  Skipped: 0  Warnings: 0
You will notice gcache.page files will be generated to contain the big writeset, as reported in the MySQL error log:
[Note] WSREP: Created page /var/lib/mysql/gcache.page.000001 of size 208431655 bytes
[Note] WSREP: Deleted page /var/lib/mysql/gcache.page.000001
We are going to shut down one of the Galera nodes (node1) to see how it performs when rejoining the cluster:
$ service mysql stop
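When the node is started again, whether it rejoins via IST or falls back to a full state snapshot transfer (SST) depends on whether the writesets it missed still fit in a donor’s gcache. One way to check after restarting is to look for IST/SST messages in the error log; the log path below is the Ubuntu default and may differ on your system:

```shell
# Restart the node, then check how it synced back with the cluster
service mysql start
grep -E 'IST|SST' /var/log/mysql/error.log | tail
```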
Database vendors regularly issue critical patch updates to address software bugs or known vulnerabilities, but for a variety of reasons, organizations are often unable to install them in a timely manner, if at all. Evidence suggests that companies are actually getting worse at patching databases, with an increasing number violating compliance standards and governance policies.
Patching that requires database downtime is a serious concern in a 24x7 environment; however, most cluster upgrades can be performed online. ClusterControl performs a rolling upgrade of the cluster, upgrading and restarting one node at a time. The exact upgrade steps may differ slightly between cluster types.
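Conceptually, a rolling upgrade loops over the nodes, upgrading and restarting each one while the rest of the cluster keeps serving traffic. A simplified sketch for a Galera cluster; the host names and package name are placeholders, and ClusterControl automates these steps with additional safety checks:

```shell
# Illustrative only -- host and package names are examples
for host in galera1 galera2 galera3; do
  ssh root@$host 'apt-get update && apt-get install -y <galera-server-package>'
  ssh root@$host 'service mysql restart'
  # Wait for the node to rejoin and report Synced
  # (e.g. check the wsrep_local_state_comment status variable) before moving on
done
```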
In this post, we are going to show you how to upgrade your database cluster. Some things to be aware of:
Join this technical webinar to learn about the new features in the latest Galera 3.0 release.
You'll learn how Galera integrates with MySQL 5.6 and Global Transaction IDs to enable cross-datacenter and cloud replication over high-latency networks. The benefits are clear: a globally distributed MySQL setup across regions, delivering high availability and real-time responsiveness.