The Severalnines team is pleased to announce the release of ClusterControl 1.2.6. This release contains key new features along with performance improvements and bug fixes. We have outlined some of the key features below.
Highlights of ClusterControl 1.2.6 include:
Centralized Authentication using LDAP or Active Directory: ClusterControl now supports Active Directory and LDAP authentication. This allows users to log into ClusterControl by using their corporate credentials instead of a separate password. LDAP groups can be mapped onto ClusterControl user groups to apply roles to the entire group.
In the wake of recent concerns and debates raised around the Heartbleed bug, we wanted to update Severalnines ClusterControl users on any impact this bug might have on ClusterControl & associated databases and/or applications.
If your ClusterControl web application has been accessible from the internet, then you have most likely also been exposed to the Heartbleed OpenSSL security bug; see http://heartbleed.com for more details.
By default, our database deployment script enables SSL encryption for the Apache web server on the Controller host with a generated private SSL key and a self-signed certificate. SSL encryption is used between the UI and the Controller REST API if you have clusters added with HTTPS, which we do by default. The content that is encrypted (and which an attacker could potentially get access to via this bug) is primarily monitoring and ClusterControl application data.
Test your server for Heartbleed: https://www.ssllabs.com/ssltest
If you are concerned that your Controller server has been compromised, you should immediately upgrade the OpenSSL package for your distribution and then generate a new private key and certificate.
First, create a self-signed certificate by following the instructions in this post: http://www.akadia.com/services/ssh_test_certificate.html
Then install the new private key and certificate by updating your Apache web server configuration, and restart the web server.
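As a sketch, generating the new key and self-signed certificate can also be done in one OpenSSL command (the file names, one-year validity, and certificate subject below are placeholders; adjust them to match your host):

```shell
# Generate a new 2048-bit private key and a self-signed certificate
# valid for one year (file names and subject are illustrative)
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=clustercontrol.example.com"

# Inspect the new certificate's subject before installing it
openssl x509 -noout -subject -in server.crt
```

Point Apache's SSLCertificateFile and SSLCertificateKeyFile directives at the new files, then restart Apache.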
The Severalnines Team Will Be Onsite at Booth 418 - Come Say Hello!
If you’re still thinking about whether or not you should be attending the Percona Live MySQL Conference & Expo that starts on Monday 31st of March, here are 9 reasons (we are Severalnines, after all) why we’ll be there. May these help you positively in your decision making ;-)
Sounds like a cliché reason to be listed here, but this year’s program is in fact quite diversified in its content and speakers. Of course, all of the expert technical content that one would expect from such a conference is there and it would be pointless to try and list it all here; the array of speakers is impressive and seems to include quite a few ‘user-speakers’ as opposed to ‘industry-speakers’.
Interestingly though, there are also talks or sessions being offered this year, which we wouldn’t have seen in previous years (or if we just missed them, please shout).
These talks and sessions are entirely relevant to the audience expected at the conference and yet, they stand out, probably because they seem less obvious and don’t fall into the wider techie talk category.
Take for example Erin O’Neill (Blackbird)’s proposed session on ‘MySQL & women - where are all the women?’: here’s a topic that stands out (though it shouldn’t have to of course); it’s awesome that this session is being offered and that the conference creates a forum for that topic. And it is a great question: where ARE all the women that work with MySQL-related technology (as opposed to at MySQL or in a MySQL focussed company)? If some of these women are reading this post, please raise your hands; and join the discussion next week.
Geoffrey Anderson (Box)’s talk ‘To hire or to train, that is the question’ is another great example. It’s an absolutely relevant topic for many conference participants, though it may not be an immediately obvious one. Many of them are likely faced with the challenge of finding and/or developing the best talent possible for their organisations. So this sounds like a promising talk as well and it’s great that it is being offered.
And there are many other ‘quirky’ technical talks as well, in addition to a whole range of quality talks with high-value content.
With datacenters being stretched by resource-intensive applications, more and more businesses are outgrowing their existing in-house capacity and having to reconfigure their IT operations. But how do you migrate a busy application to a totally new data center without downtime? How will the application scale in a virtualized cloud environment? And how do you guard against cloud server failures and keep a high level of uptime?
In this example, we will show you how to migrate a Web application (Wordpress) from a local data center to an AWS VPC, without any downtime!
Note that we will cluster the database using Galera Cluster. This ensures the database is load-balanced across multiple servers, with failure handling and recovery managed by the cluster software. In our test scenario, we will deploy one web server with Wordpress, but it is of course possible to add more web server instances and replicate the filesystem. See this blog for more info.
The high-level architecture can be illustrated as follows:
1. We will deploy a Virtual Private Cloud (VPC) in AWS so we can assign a static private IP address to each node. Log in to the AWS management console and choose “VPC with a Single Public Subnet”. Configure the VPC name and subnet as follows:
And click “Create VPC” to start creating your private cloud.
SugarCRM is the leader in open source CRM systems, and has been adopted by some of the largest firms, including IBM. The CRM software includes all sales, marketing and support tools out of the box, and can also be extended to integrate social media sources. For those depending on SugarCRM, especially when deploying in cloud environments with lower SLAs, having a high availability architecture can make a lot of sense.
In this blog post, we will show you how to cluster SugarCRM Community Edition with MySQL Galera Cluster. For simplicity, we will use NFS as the shared storage system (storage1) but keep in mind that storage1 is a single point of failure. Have a look at our previous blogs on how to deploy other shared file systems like GlusterFS, OCFS2, GFS2, csync2 with lsyncd or CephFS.
We will use a total of 5 servers. SugarCRM will be co-hosted with the Galera nodes, NFS storage will be co-hosted with the primary HAproxy node, and ClusterControl will be co-hosted with the secondary HAproxy node. The high-level architecture is illustrated in the following figure:
Turn SELinux and firewalls off on all nodes. All nodes should have the following host definitions in /etc/hosts:
192.168.197.100 virtual-ip mysql
192.168.197.101 haproxy1 keepalived1 storage1
192.168.197.102 haproxy2 keepalived2 clustercontrol
192.168.197.111 web1 galera1
192.168.197.112 web2 galera2
192.168.197.113 web3 galera3
Thanks to everyone who attended this week’s webinar; if you missed the sessions or would like to watch the webinar again & browse through the slides, they are now available online.
Special thanks to Seppo Jaakola, CEO at Codership, the creators of Galera Cluster, and to Johan Andersson, CTO at Severalnines, for their presentations and the live demo.
Webinar topics discussed:
Watch the replay:
For ops folks with multiple environments and instances to manage, a fully programmable infrastructure is the basis for automation. ClusterControl exposes all functionality through a REST API. The web UI also interacts with the REST API to retrieve monitoring data (cluster load, alarms, backup status, etc.) or to send management commands (add/remove nodes, run backups, upgrade a cluster, add/remove load balancer, etc.). The API is written in PHP and runs under Apache. The diagram below illustrates the architecture of ClusterControl.
Figure: ClusterControl - Agentless Architecture
In this blog post, we will show you how to interact directly with the ClusterControl API to retrieve monitoring data or to perform management tasks.
All requests against the ClusterControl API URL should include the ClusterControl API Token as an HTTP header (CMON_TOKEN) for authentication. The ClusterControl API URL and token can be retrieved from the Cluster Registrations page in the ClusterControl UI.
The request URI with query strings should be in the following format:
<ClusterControl API URL>/<API group>.json?clusterid=<Cluster ID>&_dc=<Unix Timestamp>&<options>
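As a sketch of what such a request looks like from the command line (the API URL, token value, and the "clusters" API group below are placeholder assumptions; take the real values from the Cluster Registrations page):

```shell
# Placeholder values -- substitute your own from the Cluster Registrations page
CC_API_URL="https://clustercontrol.example.com/cmonapi"
CMON_TOKEN="0123456789abcdef"   # assumed token value
CLUSTER_ID=1
TS=$(date +%s)                  # _dc cache-busting Unix timestamp

# Build the request URI in the format shown above
REQUEST="${CC_API_URL}/clusters.json?clusterid=${CLUSTER_ID}&_dc=${TS}"

# The token is passed as the CMON_TOKEN HTTP header (command shown, not executed)
echo curl -s -H "CMON_TOKEN: ${CMON_TOKEN}" "$REQUEST"
```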
Here is a summary of resources & tools that we’ve made available to you in the past weeks. If you have any questions on these, feel free to contact us!
We are pleased to announce the release of ClusterControl 1.2.5, which now supports MySQL 5.6 and Global Transaction IDs to enable cross-datacenter and cloud replication over high latency networks. Galera users are now able to assign nodes to their respective datacenter. Other features include User Defined Alerts and agent-less monitoring.
Galera is slowly but surely establishing itself as a credible replacement for traditional MySQL master-slave architectures. But how do you migrate? Does the schema or application change? What are the limitations? Can you migrate without service interruption?
Join Codership CEO Seppo Jaakola and Severalnines CTO Johan Andersson for this webinar and find out how you can migrate.
The Severalnines team is pleased to announce the release of ClusterControl 1.2.5. This release contains key new features along with performance improvements and bug fixes. We have outlined some of the key features below.
Highlights of ClusterControl 1.2.5 include:
Hybrid replication, i.e. combining Galera and asynchronous MySQL replication in the same setup, became much easier with MySQL 5.6 and GTID. Although it was fairly straightforward to replicate from a standalone MySQL server to a Galera Cluster, doing it the other way round (Galera → standalone MySQL) was a bit more challenging. At least until MySQL 5.6 and GTID.
There are a few good reasons to attach an asynchronous slave to a Galera Cluster. For one, long-running reporting/OLAP type queries on a Galera node might slow down an entire cluster, if the reporting load is so intensive that the node has to spend considerable effort coping with it. So reporting queries can be sent to a standalone server, effectively isolating Galera from the reporting load. In a belts and suspenders approach, an asynchronous slave can also serve as a remote live backup.
In this blog post, we will show you how to replicate a Galera Cluster to a MySQL server with GTID, and how to failover the replication in case the master node fails.
In MySQL 5.5, resuming broken replication requires you to determine the last binary log file and position, which differ across Galera nodes if binary logging is enabled. We can illustrate this situation with the following figure:
If the MySQL master fails, replication breaks and the slave will need to switch to another master. You will need to pick a new Galera node and manually determine the binary log file and position of the last transaction executed by the slave. Another option is to dump the data from the new master node, restore it on the slave and start replication with the new master node. These options are of course doable, but not very practical in production.
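With GTID, by contrast, failover boils down to pointing the slave at the new master and letting auto-positioning work out where to resume. A minimal sketch, with placeholder host name, credentials, and file name (run the generated file on the slave with `mysql -uroot -p < change_master.sql`):

```shell
# Write the GTID-based failover statements to a file, to be executed on
# the slave; host and replication credentials below are assumptions
cat > change_master.sql <<'EOF'
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'galera2',        -- the Galera node promoted to master
  MASTER_USER = 'slave_user',     -- assumed replication user
  MASTER_PASSWORD = 'slave_pass',
  MASTER_AUTO_POSITION = 1;       -- resume from GTID; no binlog file/position needed
START SLAVE;
EOF
```

Because MASTER_AUTO_POSITION is set, there is no need to determine the binary log file and position on the new master node.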