Here is a summary of resources & tools that we’ve made available to you in the past weeks. If you have any questions on these, feel free to contact us!
We are pleased to announce the release of ClusterControl 1.2.5, which now supports MySQL 5.6 and Global Transaction IDs to enable cross-datacenter and cloud replication over high latency networks. Galera users are now able to assign nodes to their respective datacenter. Other features include User Defined Alerts and agent-less monitoring.
Galera is slowly but surely establishing itself as a credible replacement for traditional MySQL master-slave architectures. But how do you migrate? Does the schema or application change? What are the limitations? Can you migrate without service interruption?
Join Codership CEO Seppo Jaakola and Severalnines CTO Johan Andersson for this webinar and find out how you can migrate.
The Severalnines team is pleased to announce the release of ClusterControl 1.2.5. This release contains key new features along with performance improvements and bug fixes. We have outlined some of the key features below.
For additional details about the release:
Highlights of ClusterControl 1.2.5 include:
Hybrid replication, i.e. combining Galera and asynchronous MySQL replication in the same setup, became much easier with MySQL 5.6 and GTID. Although it was fairly straightforward to replicate from a standalone MySQL server to a Galera Cluster, doing it the other way round (Galera → standalone MySQL) was a bit more challenging. At least until MySQL 5.6 and GTID.
There are a few good reasons to attach an asynchronous slave to a Galera Cluster. For one, long-running reporting/OLAP-type queries on a Galera node can slow down the entire cluster if the reporting load is intensive enough. Sending reporting queries to a standalone server effectively isolates Galera from that load. As a belt-and-suspenders measure, the asynchronous slave can also serve as a remote live backup.
In this blog post, we will show you how to replicate a Galera Cluster to a MySQL server with GTID, and how to fail over replication in case the master node fails.
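As a rough idea of the prerequisites, both the Galera node that will act as replication master and the standalone MySQL 5.6 slave need binary logging and GTID enabled. A minimal my.cnf sketch, with example values (use a different server_id on each host):

# Example fragment for the Galera node acting as master;
# the standalone slave gets the same GTID settings with its own server_id.
[mysqld]
server_id=101
log_bin=binlog
# log_slave_updates makes sure writesets applied from the other Galera nodes
# also end up in this node's binary log, so the slave sees all transactions.
log_slave_updates=1
gtid_mode=ON
enforce_gtid_consistency=1

A MySQL restart is required for these settings to take effect.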
In MySQL 5.5, resuming broken replication requires you to determine the last binary log file and position, and these differ on each Galera node (assuming binary logging is enabled). We can illustrate this situation with the following figure:
If the MySQL master fails, replication breaks and the slave will need to switch to another master. You will need to pick a new Galera node and manually determine the binary log file and position of the last transaction executed by the slave. Another option is to dump the data from the new master node, restore it on the slave and start replication from there. These options are doable, but not very practical in production.
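With GTID, repointing the slave at another Galera node becomes a single CHANGE MASTER TO statement using MASTER_AUTO_POSITION, since the slave can work out which transactions it is missing from its own gtid_executed set. A minimal sketch, where the hostname, user and password are placeholders:

$ mysql -uroot -p -e "STOP SLAVE;"
$ mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='galera2', MASTER_PORT=3306, MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_AUTO_POSITION=1;"
$ mysql -uroot -p -e "START SLAVE;"
$ mysql -uroot -p -e "SHOW SLAVE STATUS\G"   # verify Slave_IO_Running and Slave_SQL_Running are both Yes

No binary log file or position needs to be specified at all; the new master and the slave negotiate the starting point from the GTIDs the slave has already executed.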
Join us next Monday as we host the Stockholm MongoDB User Group Meetup in Kista, also known as the Wireless Valley.
Our very own Vinay Joosery will be speaking about how best to automate the management & deployment of database clusters, specifically MongoDB clusters, though the same principles apply to MySQL, MariaDB and Percona XtraDB-based clusters. Henrik Ingo of MongoDB will be talking about Analytics with MongoDB & Hadoop. And Jim Dowling, a Senior Researcher at the Swedish Institute of Computer Science, will talk about a Hadoop PaaS platform.
So whether you’re from the MySQL or NoSQL world, there’ll be plenty of good content here to walk away with in addition to getting together with fellow open source database enthusiasts.
Analytics with MongoDB alone, and with Hadoop Connector
A Hadoop PaaS platform
Severalnines Automation and Cluster Management for MongoDB
Virtual Machines are great, and very useful when trying out new software. However, they can be an unnecessarily heavyweight solution when testing clusters, especially if these consist of multiple nodes running exactly the same software; each VM runs a full-blown OS image. Linux Containers (LXC), on the other hand, provide lightweight OS-level virtualization and can run multiple isolated systems on a single host. Docker is a wrapper around LXC that automates the deployment of applications inside containers.
A notable advantage of Docker + LXC is that you can run many containers on a single host. They all share the host's kernel and, where possible, the same binaries, so deployment is extremely fast. This makes Docker a good way to spin up multiple Galera nodes on a single host.
In this post, we will create MySQL Galera Cluster containers using Docker, and fire them up to form a database cluster running on a single host.
1. Docker works best on kernel 3.8 (due to a bug in LXC, at the time of writing). Install the kernel image for Raring:
$ sudo apt-get update
$ sudo apt-get install -y linux-image-generic-lts-raring linux-headers-generic-lts-raring
$ sudo init 6
2. Verify the kernel version:
$ uname -a
Linux lxc 3.8.0-35-generic #52~precise1-Ubuntu SMP Thu Jan 30 17:24:40 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
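From there, the basic workflow is to pull a Galera-enabled MySQL image and start one container per node: bootstrap the first node, then point the remaining nodes at it. A rough sketch, where the image name and the CLUSTER_ADDRESS environment variable are placeholders (how the cluster address is passed depends entirely on how the image is built):

$ sudo docker pull yourrepo/mysql-galera                                  # hypothetical image name
$ sudo docker run -d --name galera1 yourrepo/mysql-galera                 # bootstrap node (empty gcomm:// address)
$ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' galera1       # note the container IP, e.g. 172.17.0.2
$ sudo docker run -d --name galera2 -e CLUSTER_ADDRESS=gcomm://172.17.0.2 yourrepo/mysql-galera
$ sudo docker run -d --name galera3 -e CLUSTER_ADDRESS=gcomm://172.17.0.2 yourrepo/mysql-galera
$ sudo docker ps                                                          # all three containers should be up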
Galera is slowly but surely establishing itself as a credible replacement for traditional MySQL master-slave architectures.
The benefits are clear: a true multi-master InnoDB setup with built-in failover, potentially across data centers.
But how do you migrate? Does the schema or application change? What are the limitations? Can migration be done online, without service interruption? What are the potential risks, and how can they be addressed?
Tuesday, March 11th 2014
In this webinar, Codership CEO Seppo Jaakola and Severalnines CTO Johan Andersson will walk you through what you need to know in order to migrate from standalone or a master-slave MySQL/MariaDB setup to Galera Cluster.
We will also see a live demo.
Great news for Percona customers! We are thrilled to announce our partnership with Percona. Effective immediately, Percona customers will be able to enjoy the advanced automation, monitoring and cluster management capabilities of ClusterControl. Percona will bundle Percona ClusterControl, a privately branded version of ClusterControl Community, with its Cluster support contracts. Together we're providing support for the full stack, from Percona XtraDB Cluster to management tools, giving customers one number to call.
Anyone who has deployed, managed or monitored a mission-critical database cluster will know that having the right tools can make a real difference. This has been our goal with ClusterControl, with a focus on operational management from day one. In only a few clicks, ClusterControl lets customers deploy Galera-based MySQL clusters in private datacenters or in the cloud, and it provides real-time visibility into the health and performance of the cluster. With datacenter environments becoming increasingly complex, we believe the automation and management capabilities of ClusterControl will be very attractive to ops people.
Liferay is an open-source content management system written in Java. It is used by a number of high traffic sites, as this survey suggests.
Clustering Liferay and other components such as the database and the file system is a good way to handle the performance requirements of a high-traffic site. The latest Liferay version introduces features that simplify clustering, such as built-in support for Ehcache clustering, Lucene replication, read/write splitting for the database (if you run a master-slave architecture) and support for various file systems for the portal repository.
In this post, we are going to show you how to cluster Liferay in a multi-node load-balanced setup. The database backend will be based on Galera Cluster for MySQL, and the file system clustered using Ceph FS.
We will have a three-node database cluster, with two of the MySQL instances co-located with the Liferay portal. The third MySQL instance is co-located with ceph-admin. Another two nodes, ceph-osd0 and ceph-osd1, will be used as a storage pool for the Liferay repository using CephFS. ClusterControl will be hosted on ceph-osd1. We will be using Liferay Portal 6.2 Community Edition GA1, and all hosts are running CentOS 6.4 64bit. All commands shown below are executed as the root user. SELinux and iptables are turned off.
Our hosts definition on all nodes:
192.168.197.111 liferay1 galera1
192.168.197.112 liferay2 galera2
192.168.197.113 ceph-mds ceph-admin ceph-mon1 galera3
192.168.197.114 ceph-osd0 ceph-mon2 haproxy1
192.168.197.115 ceph-osd1 ceph-mon3 haproxy2 clustercontrol
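On the Liferay side, the portal instances then just need to be pointed at the database through the load balancer, which is done in portal-ext.properties. A minimal sketch (the file path, HAProxy listening port 33306, database name and credentials are placeholders, not values taken from this setup):

$ cat >> /opt/liferay/portal-ext.properties <<'EOF'
# JDBC connection through HAProxy in front of the Galera nodes (example host/port)
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://haproxy1:33306/liferay?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay
jdbc.default.password=liferaypassword
# Enable Liferay's built-in clustering (ClusterLink)
cluster.link.enabled=true
EOF

Both Liferay nodes get the same settings, so either can serve traffic behind the load balancer.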
Thanks to everyone who attended this week’s webinar; if you missed the sessions or would like to watch the webinar again and browse through the slides, they are now available online.
Special thanks to Seppo Jaakola from Codership, the creators of Galera Cluster, for walking us through the various scenarios of Galera recovery.
Webinar topics discussed:
Watch the replay:
Alfresco is a popular open-source collaboration tool. It is Java-based and includes a content repository, a web application framework and a web content management system. Critical large-scale implementations that require 24/7 uptime call for a multi-node cluster. Since Alfresco depends on external components such as the database and the filesystem, clustering only the Alfresco instances would not be enough.
In this post, we are going to show you how to deploy an active-active Alfresco cluster with MySQL Galera Cluster (database), GlusterFS (filesystem) and HAProxy with Keepalived (load balancer) to achieve redundancy across all the required system components.
Please note that clustering of Alfresco instances is only available in Alfresco Enterprise, where Hazelcast provides multicast messaging between the web-tier nodes. This blog uses the Community edition, so changes such as site/user dashboard layout edits made on one Alfresco node are not replicated to the other nodes. The content repository, however, works fine in the Community edition, since the content files reside on the same GlusterFS partition and the content metadata is replicated via Galera.
We will have a three-node Galera cluster co-located with Alfresco. Another two nodes, fs1 and fs2, will be used as a replicated file storage system for Alfresco content using GlusterFS. These two nodes will also run HAProxy and Keepalived for highly available load balancing. ClusterControl will be hosted on fs2 to monitor the Galera nodes. We will be using Alfresco 4.2 (Community edition) and all hosts are running Debian 7.2 Wheezy 64bit. All commands shown below are executed as the root user.
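For orientation, the key wiring on each Alfresco node ends up in alfresco-global.properties: the content store is placed on the shared GlusterFS mount, and the database connection goes through the HAProxy/Keepalived load balancer. A minimal sketch, where the file path, mount point, virtual IP, port, database name and credentials are all placeholders:

$ cat >> /opt/alfresco/tomcat/shared/classes/alfresco-global.properties <<'EOF'
# Content store on the replicated GlusterFS mount (example mount point)
dir.root=/mnt/gluster/alf_data
# Database connection through the load balancer's virtual IP (example address and credentials)
db.driver=org.gjt.mm.mysql.Driver
db.url=jdbc:mysql://192.168.1.100:33306/alfresco?useUnicode=yes&characterEncoding=UTF-8
db.username=alfresco
db.password=alfrescopassword
EOF

With identical settings on every Alfresco node, any node can take over should another one fail.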