Database server migration

Introduction

In every database's life cycle there is always a moment when it is time to 'move'. There can be many reasons why: a hard disk running out of space, poor network or system performance, or outdated hardware. Here at Aquafadas, the two latter factors pushed us to migrate. This article outlines the journey we went through and the setbacks we encountered. We hope it will be useful and answer some of the questions you might have.

Enjoy

Yann Ponzoni (Process and Quality Architect) and Christopher Petit (Tech Lead, Cloud Connect)

Timeline

Monday, 9.30am. The CTO enters the Cloud Connect office:

  • CTO: “Guys, we’re changing our database!”
  • Team: “OK…”
  • CTO: “But we need to keep production going, no downtime!”
  • Team: “Yes… challenge accepted!!!”

Objectives

Successfully migrate production data between two Data Centers with no impact on end users – zero downtime.

Key stats at the start

  • 200+ GB of documents in the database
  • 300+ rows updated per second
  • 20,000,000+ rows fetched per second

Technology

Our database runs on PostgreSQL 9.3, which is not the most recent version but the one that serves us best: having tested various PostgreSQL versions, this one proved to perform best with our database schema. We are also investigating database structure modifications to take advantage of the latest PostgreSQL version, but that is a topic for a separate article.

Architecture

Our database cluster consists of one master and two slave servers. The master server handles both read and write requests (SELECT, UPDATE, INSERT, DELETE), while the slave servers handle read requests only (SELECT). Requests are distributed by pgpool-II, a load balancer and connection pooler for PostgreSQL.
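For illustration, a minimal pgpool-II configuration for this kind of cluster could look like the excerpt below; the host names and values are placeholders, not our production settings.

    # pgpool.conf excerpt (illustrative values only)
    listen_addresses = '*'
    port = 5432

    # Streaming-replication mode: writes go to the master, reads are load-balanced
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
    load_balance_mode = on

    # Master node (read/write)
    backend_hostname0 = 'db-master.example.com'
    backend_port0 = 5432
    backend_weight0 = 1

    # Slave nodes (read-only)
    backend_hostname1 = 'db-slave1.example.com'
    backend_port1 = 5432
    backend_weight1 = 1

    backend_hostname2 = 'db-slave2.example.com'
    backend_port2 = 5432
    backend_weight2 = 1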

Limitations

Technical limitations

Database size

Our database exceeded 200 GB: large, though by no means the largest out there. However, transferring 200 GB over the network can be rather time consuming.

PostgreSQL version

At Aquafadas we have been using PostgreSQL 9.3, the version that delivers the best request performance for us. However, it is not BDR compliant (Bi-Directional Replication), which ruled out a very simple master-master replication migration.

Geographical distance

The two Data Centers were located in different countries: our data had to travel between Amsterdam and Paris. This meant the transfer could not happen instantly, which in turn would create a period of instability.

Security

  • Data and data transfer had to be secured
  • The original Data Center was locked down by the provider and accessible only via a rather unstable IPsec VPN

Functional limitations

Zero downtime

With a high number of clients using our database around the clock, we could not accept any service interruption. We had to find a way to keep it running while we migrated the data.

Transparency

We also had to ensure that our work was invisible to users, even though we had to change DNS records in order to route traffic to the new servers.

Potential options considered

Master-master replication

At first, master-master replication using PostgreSQL BDR seemed the best solution. We planned to create a cross-Data Center database cluster with master-to-master replication, so that the two architectures could run side by side while replicating each other.

Pros

  • No downtime
  • Switch from one Data Center to another
  • Rollback possible

Cons

  • Lack of compatibility with our architecture
    • BDR only works with PostgreSQL 9.5
    • Our database schemas are not compatible with BDR
  • The system would be slower with our database schemas

Option rejected due to lack of compatibility.

pg_dump + pg_restore

We also considered a traditional backup method: simply exporting our database to an SQL dump and then importing it on the new server.
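For reference, this classic approach boils down to two commands; the host names and database name below are placeholders.

    # On the old server: dump the database in the custom format
    pg_dump -Fc -h old-db.example.com -U postgres mydb > mydb.dump

    # On the new server: restore it, using several parallel jobs to speed things up
    pg_restore -j 8 -h new-db.example.com -U postgres -d mydb mydb.dump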

 

Pros

  • The database is contained in a single file that can be restored on several servers in parallel

Cons

  • Any data written after the dump has started is not included
  • The large size of the dump file means a very lengthy transfer
  • Very lengthy dump (~2 h)
  • Even longer restore process (~15 h)

Option rejected due to an unacceptably long downtime.

pg_basebackup + WAL transfer

Next we considered a migration method based on the pg_basebackup utility, which lets us recreate the database on a new, ready-to-use node. In this case the copy is physical rather than logical: we copy the raw files of the database cluster from the existing Data Center to a node in the new Data Center. Once the file transfer is completed, the WAL files (Write-Ahead Log) are sent to the new node and replayed on the fresh database server at migration time to bring the data up to date. However, we would also need to create the slaves of this new master once it is up, and our client load is too high to be served by a single node while the slaves are being built, which takes approximately 2 hours in our case.
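One way to implement this, assuming WAL segments are shipped between the two Data Centers (hosts and paths below are illustrative), would be:

    # 1. Physical copy of the data directory onto the new node
    pg_basebackup -h old-master.example.com -U replication \
        -D /var/lib/postgresql/9.3/main -F plain -X stream -P

    # 2. While the copy travels, keep archiving WAL segments on the old master
    #    (postgresql.conf on the old master)
    #    archive_mode    = on
    #    archive_command = 'rsync %p new-db.example.com:/var/lib/postgresql/wal_archive/%f'

    # 3. At migration time, replay the archived WAL on the new node to catch up
    #    (recovery.conf on the new node)
    #    restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'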

 

Pros

  • Database modifications made during the migration are retained
  • Significantly reduced service interruption
  • Allows for asynchronous replication

Cons

  • Replaying the WAL takes time before the PostgreSQL database is ready to use
  • Higher degree of complexity
  • The cluster is not ready straight after the migration

Cascading replication

Finally, we considered cascading replication, where we simply attach the Data Center B database as a slave node of an existing slave in Data Center A. We chose a slave-to-slave connection to reduce the load on the master node, which already has plenty to do with its two original slaves. The result is a large cross-Data Center database cluster with two cascaded levels. This solution does introduce some latency in the replication of changes, but it remains largely acceptable.

The closest metaphor to describe this process is mitosis, where a cell duplicates itself to create a new one.
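Schematically, the replication chain we end up with looks like this (host names are purely illustrative):

    Data Center A (old)                      Data Center B (new)

    old-master ──> old-slave-1 ────────────> future-master ──> new-slave-1
             └──> old-slave-2                             └─> new-slave-2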

Pros

  • Data stays up to date in both Data Centers
  • Zero service interruption
  • The new cluster can be fully built before the migration

Cons

  • Promoting the new cluster to master and switching over to it still requires a small amount of time

Solutions comparison

Comparison table

The timings below are estimates only, based on our own database size (200 GB), and may vary depending on your data and infrastructure.

                                                   | pg_dump + pg_restore | pg_basebackup + WAL | Cascading replication
Downtime                                           | ~20 h                | ~2 h                | < 1 min
Data edited during the migration included          | no                   | yes                 | yes
Entire cluster restored                            | no                   | no                  | yes, before the migration
Immediate rollback to the original infrastructure  | yes                  | no                  | no (*)

(*) It is however possible to create a third cluster in Data Center A as a slave of the new one.

Chosen solution and results

New Data Center preparation

We started by replicating in the new Data Center the same architecture as in the current one (HAProxy, web servers, PostgreSQL servers…). We then created empty database clusters, ready to be filled with pg_basebackup, and installed our application.

Lines of code
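The preparation itself is short. A minimal sketch, assuming a Debian-style installation and the default 9.3 data directory (paths are examples):

    # Install the same PostgreSQL major version as the source cluster
    apt-get install postgresql-9.3

    # Stop the freshly created instance and empty its data directory,
    # so that pg_basebackup can later fill it with the replicated data
    service postgresql stop
    rm -rf /var/lib/postgresql/9.3/main/*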

 

Redirection of internet feed to minimise DNS impact

Every Data Center migration raises the issue of DNS configuration, which takes a long time to propagate across the globe. Anticipating this issue early helps make the migration transparent.

To tackle this challenge we decided to use an HAProxy load balancer on the new infrastructure. A few days before the migration date, we set up this load balancer to forward all traffic to the old infrastructure and changed the DNS record to use the IP address of the new one. This way, the DNS record could be updated around the world quietly. The load balancer then became our fire button: launching the new infrastructure is simply a matter of updating its configuration.
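The relevant part of the HAProxy configuration could look like the excerpt below; backend names and addresses are placeholders. Switching Data Centers then only means changing the default_backend line and reloading HAProxy.

    # haproxy.cfg excerpt (illustrative values only)
    frontend public_www
        bind *:80
        default_backend old_datacenter   # flipped to new_datacenter on migration day

    backend old_datacenter
        server old-web1 203.0.113.10:80 check

    backend new_datacenter
        server new-web1 10.1.0.10:80 check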

Setting up the cascading replication

We now set up the replication of the future master server (in the new infrastructure) from a slave (in the old one). To do that, we use pg_basebackup with the connection parameters of the old slave node. This procedure takes about 2 hours, but since this is replication, the old Data Center keeps working without any downtime: the data is transferred to the new Data Center through the replication system.
Once this is done, we configure the slave nodes of the new architecture to follow the future master node.

Voilà! We now have a database cluster on 3 levels, where every node is updated according to the changes made on the master node.

Lines of code

Stage 1: Start the copy from the old host to the new
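A sketch of this step, run on the future master in the new Data Center (host names and paths are examples; the old slave's pg_hba.conf must allow a replication connection from this host):

    # Copy the data directory of the old slave onto the future master,
    # streaming the WAL produced during the copy so nothing is lost
    pg_basebackup -h old-slave1.example.com -p 5432 -U replication \
        -D /var/lib/postgresql/9.3/main -X stream -P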

Stage 2: Configuration update

Example for the first node
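A hedged example of what the recovery.conf of this first node (the future master) could contain; the values are illustrative. The new slaves get the same file with the future master as host, and the old slave needs enough max_wal_senders plus a replication entry in its pg_hba.conf.

    # recovery.conf on the future master (new Data Center), replicating from the old slave
    standby_mode = 'on'
    primary_conninfo = 'host=old-slave1.example.com port=5432 user=replication'
    recovery_target_timeline = 'latest'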

Final stage: Server launch
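Starting the nodes of the new Data Center is then straightforward; they come up as standbys and catch up through replication (commands assume a Debian-style installation):

    # On each new node
    service postgresql start

    # On the old slave feeding the future master: check that replication is flowing
    psql -c "SELECT client_addr, state FROM pg_stat_replication;"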

Final steps and SWITCH!

Everything is ready; now it is time to make sure that the transactions taking place in the old Data Center are completed before we migrate. To do that, we put the database into maintenance mode for 30 seconds, to ensure that the latest data had been replicated between Amsterdam and Paris. Additionally, to check that everything was working correctly, we inserted new rows into a table created for the occasion: if these rows appeared in the new cluster, the process had worked.
During this window, we promoted the master node of the new Data Center and switched the load (thanks to HAProxy) over to the web servers of the new infrastructure.

Code
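The switch itself boils down to a handful of commands. The table name, hosts and paths below are illustrative, not our production values:

    # 1. On the old master, just before the switch: write a marker row
    psql -h old-master.example.com -d mydb \
         -c "INSERT INTO migration_check (created_at) VALUES (now());"

    # 2. On the future master in the new Data Center: check that the row has arrived
    psql -h future-master.example.com -d mydb \
         -c "SELECT max(created_at) FROM migration_check;"

    # 3. Promote the future master: it leaves recovery and starts accepting writes
    pg_ctl -D /var/lib/postgresql/9.3/main promote

    # 4. Point HAProxy at the web servers of the new infrastructure and reload it
    sed -i 's/default_backend old_datacenter/default_backend new_datacenter/' /etc/haproxy/haproxy.cfg
    service haproxy reload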

Sources and references

Here are all the articles and some links that helped us come up with our solution.