Dockerizing a Larger PostgreSQL Installation: What could possibly go wrong?
Migrating existing systems that have worked flawlessly for years is not always an easy job, and these systems still have to be updated from time to time to use newer software versions. This is the story of such a migration: a large PostgreSQL instance (databases totalling more than 500 GiB, for a fleet management company) moved from a traditional bare-metal installation to a modern Dockerized one. This architecture was chosen to make it easier to upgrade to newer versions of PostgreSQL as they become available, and also to test the same configuration before deploying it to production. In addition, all the manual work of setting up and managing the instances was scripted with Ansible.
This presentation will cover the following topics:
- Choosing and creating the Docker images to divide the required functionality into separate containers.
- Partitioning the stored data across different storage spaces (SSDs, HDDs, …).
- Loading the production data into the database.
- Replicating data from master to slave, failing over from master to slave, and recreating slaves from a previous master.
- Creating backups and archiving WALs for point-in-time recovery (PITR), and devising retention policies for this data.
- Monitoring the instances and automating log analysis.
- Managing all of these process steps with Ansible.
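To give a flavour of the replication and PITR topics above, these hinge on a handful of PostgreSQL settings on the master. A minimal sketch (the parameter values and the `/archive` path are illustrative assumptions, not the configuration presented in the talk):

```
# postgresql.conf excerpt (master) -- illustrative values only
wal_level = replica          # 'hot_standby' on PostgreSQL 9.x
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'  # ship WALs for PITR
max_wal_senders = 5          # allow slaves to stream WAL
hot_standby = on             # let slaves serve read-only queries
```

A slave can then be seeded from the master with `pg_basebackup`, and a PITR restore replays the archived WALs up to a chosen recovery target.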
We encountered multiple problems while performing this migration, and we will present the most interesting ones along with a brief account of how we solved them.
I am a developer and part-time PostgreSQL admin. I worked at Bull Croatia for 6 years and, for the last 6 months, at Atos Croatia (Atos bought Bull last summer), on payment systems, customs software, and integration with complex systems. Currently I am working with Docker and Ansible to automate system installation, updates, and monitoring.
mail: [email protected]