Working on an old Ruby on Rails (RoR) application, I was asked to upgrade PostgreSQL from 9.5 to the latest version possible. The RoR application is deployed on-premises via Docker Compose.
The PostgreSQL documentation recommends:

> We always recommend that all users run the latest available minor release for whatever major version is in use. — https://www.postgresql.org/support/versioning/
Fortunately, my current version is already the last minor release of the 9.5 series (9.5.25), so I will upgrade from 9.5 to 9.6.
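(As an aside: the versioning scheme changed in PostgreSQL 10 — before that, the major version was the first two components, e.g. 9.5; from 10 on, it is the first component only. A tiny shell helper, with a function name of my own choosing, that illustrates the scheme on full version strings:)

```shell
# Extract the "major" series from a full Postgres version string.
# Pre-10 releases use two components for the major version (9.5),
# releases from 10 onward use just one (10, 11, ...).
major_of() {
  v="$1"
  case "$v" in
    9.*) printf '%s\n' "${v%.*}" ;;   # strip last component: 9.5.25 -> 9.5
    *)   printf '%s\n' "${v%%.*}" ;;  # keep first component: 10.23 -> 10
  esac
}

major_of 9.5.25   # prints 9.5
major_of 10.23    # prints 10
```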
Some reasonable steps for migrating under these conditions are described in:
- https://christierney.com/2022/03/09/upgrading-postgresql-on-docker-compose/ By Chris Tierney
- https://betterprogramming.pub/how-to-upgrade-your-postgresql-version-using-docker-d1e81dbbbdf9 By José Postiga
Finally, I want to achieve my goal without SSHing into the server for manual steps. I would like to do it by setting up a manually triggered CI pipeline/job, with a few simple changes (like renaming docker-compose volumes) as the only human intervention, making this process as repeatable and as little error-prone as possible 🙏.
In this case, a Bitbucket Pipeline. Anyway, this approach should work, with some adaptations, on any CI service that allows executing custom bash commands (Jenkins, GitLab CI, among others).
Hopefully, after the first round of this upgrade cycle, I will be able to perform the remaining upgrades just by changing version numbers 🙏.
Having stated all my preconditions, requirements, and ambitions, let's put our hands on the keyboard.
This is my docker-compose.yml, simplified for brevity. Only the services and details needed for my purpose are shown.
```yaml
version: '2.1'
services:
  web:
    image: my_ror_image
    depends_on:
      - pg
  pg:
    image: postgres:9.5-alpine
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=user
      - POSTGRES_DB=db_prod_pass
    hostname: pg
```
Step 1: Adapt the docker-compose file so the volume path includes the version
```yaml
version: '2.1'
services:
  web:
    image: my_ror_image
    depends_on:
      - pg
  pg:
    image: postgres:9.5-alpine
    volumes:
      - ./tmp/db_9.5:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=pass
      - POSTGRES_USER=user
      - POSTGRES_DB=db_prod_pass
    hostname: pg
```
Here, the host directory mapped to the container's data directory includes the version in its name. This will let me keep both versions' data during the automatic upgrade process, so I can roll back in case of failure.
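If the upgrade ever fails, rolling back is (in principle) just pointing the service back at the old image tag and the old directory and redeploying; for example:

```yaml
pg:
  image: postgres:9.5-alpine
  volumes:
    - ./tmp/db_9.5:/var/lib/postgresql/data
```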
Step 2: Deploy the app with the current approach
Here, before doing any upgrade, we will redeploy our app using a different folder for storing/sharing data between the host and the pg container.
SSH into your server, create a directory whose name includes the version, and copy all the content of the previous pg data directory into it:

```shell
mkdir <app-dir>/tmp/db_9.5
cp -a <app-dir>/tmp/db/. <app-dir>/tmp/db_9.5/
```
- Redeploy the latest version, including the adapted docker-compose file.
The newly deployed app will take its data from the directory that includes the version in its name.
This step is manual but will never be repeated; it is part of the preparation.
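If you want to rehearse that copy locally before touching the server, here is a small self-contained dry run (all paths are made up):

```shell
# Rehearse the data-directory copy on throwaway paths, then verify
# both trees are identical before trusting the same commands in prod.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/tmp/db/base"
echo 'fake data' > "$tmpdir/tmp/db/base/0001"

mkdir "$tmpdir/tmp/db_9.5"
cp -a "$tmpdir/tmp/db/." "$tmpdir/tmp/db_9.5/"   # -a preserves perms/ownership

diff -r "$tmpdir/tmp/db" "$tmpdir/tmp/db_9.5" && echo 'copy verified'
```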
Step 3: Create a docker-compose file for the upgrade process
```yaml
version: '2.1'
services:
  new_pg:
    image: postgres:${VERSION_TO}-alpine
    volumes:
      - ./tmp/db_${VERSION_TO}:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=user
      - POSTGRES_DB=db_labs
  old_pg:
    image: postgres:${VERSION_FROM}-alpine
    volumes:
      - ./tmp/db_${VERSION_FROM}:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=user
      - POSTGRES_DB=db_labs
```
Here, for convenience, we use environment variables to set the versions involved. This docker-compose file starts two containers, one with the FROM version of Postgres and one with the TO version. We will use them for dumping and loading 😉
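For a quick local test, one way to pin these values (an assumption on my side; the pipeline in the next step passes them inline) is an `.env` file next to the compose file, which docker-compose reads automatically:

```shell
# Contents of .env (docker-compose picks this file up automatically
# when it sits in the same directory as the compose file).
VERSION_FROM=9.5
VERSION_TO=9.6
```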
Step 4: The Bitbucket Pipeline
```yaml
upgrade_postgres:
  - variables:
      - name: DeployCode
        default: staging
        allowed-values:
          - production
          - staging
          - labs
      - name: UP_PG_FROM
        default: '9.6'
        allowed-values:
          - '9.6'
          - '10'
          - '11'
          - '12'
          - '13'
          - '14'
      - name: UP_PG_TO
        default: '10'
        allowed-values:
          - '10'
          - '11'
          - '12'
          - '13'
          - '14'
          - '15'
  - step:
      name: 'upgrade-postgres'
      script:
        - *prepare_ssh
        - scp -o StrictHostKeyChecking=no ./docker-compose-${DeployCode}.yml ${SSH_SERVER_ADDRESS}:/web-app/
        - scp -o StrictHostKeyChecking=no ./docker-compose-pg-upgrade.yml ${SSH_SERVER_ADDRESS}:/web-app/
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-${DeployCode}.yml down --remove-orphans"
        - ssh_connect_to_deployment_server "cd /web-app && VERSION_FROM=$UP_PG_FROM VERSION_TO=$UP_PG_TO docker-compose -f docker-compose-pg-upgrade.yml up -d"
        - ssh_connect_to_deployment_server "mkdir -p /web-app/tmp/db_load_$UP_PG_TO"
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-pg-upgrade.yml exec -T old_pg mkdir /db_dump_$UP_PG_FROM"
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-pg-upgrade.yml exec -T new_pg mkdir /db_load_$UP_PG_TO"
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-pg-upgrade.yml exec -T old_pg pg_dumpall -U user -f /db_dump_$UP_PG_FROM/dump.sql"
        - ssh_connect_to_deployment_server "docker cp wm-web-app-${DeployCode}_old_pg_1:/db_dump_$UP_PG_FROM/dump.sql /web-app/tmp/db_load_$UP_PG_TO/dump.sql"
        - ssh_connect_to_deployment_server "docker cp /web-app/tmp/db_load_$UP_PG_TO/dump.sql wm-web-app-${DeployCode}_new_pg_1:/db_load_$UP_PG_TO/dump.sql"
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-pg-upgrade.yml exec -T new_pg psql -U user -d postgres -f /db_load_$UP_PG_TO/dump.sql"
        - ssh_connect_to_deployment_server "cd /web-app && docker-compose -f docker-compose-pg-upgrade.yml down --remove-orphans"
```
This Bitbucket pipeline stops our online application, starts two Postgres containers (one per version), dumps the database from the FROM version, and loads it into the TO version, with the help of docker cp and Bitbucket pipeline variables.
ℹ️ Just make sure the `VERSION_FROM` and `VERSION_TO` values match the subfolder names in your `tmp/` volume folder. That's what keeps everything clean and rollback-ready.
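That check can even be scripted. Here is a hedged sketch (the function and its name are my own, not part of the pipeline) that fails fast when the FROM data directory is missing:

```shell
# Fail fast if the source data dir for VERSION_FROM does not exist,
# and pre-create the target dir for VERSION_TO.
check_volume_dirs() {
  base="$1"; from="$2"; to="$3"
  if [ ! -d "$base/tmp/db_$from" ]; then
    echo "missing source data dir: $base/tmp/db_$from" >&2
    return 1
  fi
  mkdir -p "$base/tmp/db_$to"
}

# On the server, usage would be something like:
#   check_volume_dirs /web-app "$UP_PG_FROM" "$UP_PG_TO" || exit 1
```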
Step 5: Run the pipeline manually on Bitbucket.
Step 6: Update your production docker compose and redeploy.
After a successful upgrade:
- Update your production `docker-compose` to use the new version and its corresponding volume:

```yaml
volumes:
  - ./tmp/db_10:/var/lib/postgresql/data
image: postgres:10-alpine
```

- Commit this change and redeploy.
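If even that edit feels error-prone, it can be scripted too. A sketch using sed (the file name and the exact image-tag pattern are illustrative assumptions), demonstrated here on a throwaway copy instead of the real compose file:

```shell
# Demo file standing in for docker-compose-production.yml.
demo=$(mktemp)
cat > "$demo" <<'EOF'
    image: postgres:9.6-alpine
    volumes:
      - ./tmp/db_9.6:/var/lib/postgresql/data
EOF

FROM=9.6
TO=10
# Bump both the image tag and the versioned volume path in one pass.
sed -i.bak \
  -e "s|postgres:${FROM}-alpine|postgres:${TO}-alpine|" \
  -e "s|/tmp/db_${FROM}:|/tmp/db_${TO}:|" \
  "$demo"
cat "$demo"
```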
Conclusion
What started as a scary task of upgrading an old Postgres DB turned out to be a repeatable and safe process. With just a few abstractions (a second Docker Compose file and a CI job), I turned it into a hands-off operation. Next time I want to upgrade, all I'll do is change the FROM and TO versions and hit "Run pipeline".
💡 Automation is not just about speed — it’s about reliability and peace of mind.
Hope this helps you on your own upgrade journey! 🙌