I have a production server running Postgres 9.4. The database is larger than 10 GB. Is it possible to upgrade to Postgres 9.5 without downtime and without losing data?
Upgrade tutorials recommend stopping Postgres while executing sudo pg_upgradecluster 9.4 main, but this can take a long time. Upgrading a 10 GB cluster can take several hours!
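For reference, this is roughly the sequence those tutorials describe, a sketch assuming the Debian/Ubuntu postgresql-common tooling (cluster names and package names may differ on other setups):

    # install the new major version, which creates an empty 9.5 "main" cluster
    sudo apt-get install postgresql-9.5

    # drop the empty 9.5 cluster so pg_upgradecluster can recreate it from 9.4
    sudo pg_dropcluster 9.5 main --stop

    # migrate the 9.4 cluster to 9.5; clients are disconnected from the old
    # cluster while this runs, which is where the downtime comes from
    sudo pg_upgradecluster 9.4 main

    # only after verifying the new cluster: remove the old one
    sudo pg_dropcluster 9.4 main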
I also tried pg_dump mydb > db.sql. Dropping the database and restoring the dump into PG 9.4 again (psql -d mydb -f db.sql) took about 50 minutes.
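Spelled out, the dump-and-restore test looked roughly like this (mydb is my database; role, encoding and ownership options are omitted for brevity):

    # dump the database to a plain SQL file
    pg_dump mydb > db.sql

    # drop and recreate the database, then replay the dump
    dropdb mydb
    createdb mydb
    psql -d mydb -f db.sql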
But restoring the dump into PG 9.5 only finished after more than 7 hours. Creating the indexes in particular was really slow...
    2016-07-18 00:13:55 CEST [60358-5] ERROR: canceling autovacuum task
    2016-07-18 00:13:55 CEST [60358-6] CONTEXT: automatic analyze of table ...
    2016-07-18 00:36:20 CEST [60366-1] ERROR: canceling autovacuum task
    2016-07-18 00:36:20 CEST [60366-2] CONTEXT: automatic analyze of table ...
    2016-07-18 04:21:40 CEST [60361-1] ERROR: canceling autovacuum task
    2016-07-18 04:21:40 CEST [60361-2] CONTEXT: automatic analyze of table ...
    2016-07-18 07:55:19 CEST [61316-1] ERROR: canceling autovacuum task
    2016-07-18 07:55:19 CEST [61316-2] CONTEXT: automatic analyze of table ...

So neither pg_upgradecluster nor pg_dump is an acceptable solution. Even restoring into PG 9.4 means a downtime of at least 50 minutes. Therefore: how can databases be upgraded on production servers or big master-slave clusters without downtime and data loss?