
I've dumped a database from a single standard Postgres instance (the official Docker image). It was originally migrated from SQLite, but I couldn't use pgloader against the target cluster of 4 replicating Postgres servers (probably because of password-hashing issues).

I now have a pg_dump file. However, when I run pg_restore against the cluster leader (everything is Postgres v17), the database size goes up (as listed with \l+), but nothing shows up in the tables. If I run VACUUM FULL, the size goes back to empty. I had already created the tables in the target database (since the owner and other parameters may be mismatched).

I also tried using pg_restore to write a plain SQL file and then running psql with \i file. It looks like everything works, but nothing shows up in the database.
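Roughly what I'm running (host, user, and file names here are placeholders, not the exact commands):

$ pg_restore -h leader -U postgres -d mydb dump.fc   # direct restore attempt
$ pg_restore -f out.sql dump.fc                      # or: convert the dump to plain SQL
$ psql -h leader -U postgres -d mydb
mydb=# \i out.sql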

The dump file (-Fc) is about 40 GB; the SQL file is about 75 GB. This is only three tables, with almost all the data in one of them.

Any idea what is happening? Would it make a difference to have just a leader Postgres first and add replicas later?

2 Answers


A restore writes data inside transactions: pg_restore loads each table's contents with one big COPY, and with --single-transaction the entire restore is a single transaction. Isolation (the I in ACID) means other transactions, including the ones you open from an interactive client, won't see that data until it is committed, i.e. not until the load completes. Then you'll see it all at once. The size grows, of course, as the restore process writes data to the database storage.

Also, if you interrupt the restore, the in-progress transaction is rolled back, so you won't see partial data that way either. That's the atomicity property (the A in ACID): complete in full, or act like it never even started.
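If you want that all-or-nothing behavior for the whole restore, both tools can be told to use a single transaction explicitly (a sketch; connection options are placeholders for your setup):

$ pg_restore -1 -h leader -d mydb dump.fc   # -1 is short for --single-transaction
$ psql -1 -d mydb -f out.sql                # same flag works for the plain SQL file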

We can offer further help if you tell us the exact commands that were used to create the backup and to restore it. To observe progress and problems, monitor the restore process itself.
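For example, while the restore is running you can watch the data load from another session. pg_restore loads table data with COPY, and pg_stat_progress_copy (available since v14, so fine on your v17) reports each running COPY:

SELECT relid::regclass AS table_name,
       bytes_processed,
       tuples_processed
FROM pg_stat_progress_copy;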

Also, be prepared for the fact that data laid out for quick access always consumes more space than its compacted, compressed representation in a backup. It's noteworthy that databases are often very compressible; I've always been fascinated by that observation.


I was restoring into the wrong schema. My target database had the schema 'data', while the original data was in 'public'.
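In hindsight, an easy way to spot this is to check which schema the restored rows actually landed in, for example:

SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;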

Running:

$ sed -i -e 's|public|data|g' out.sql

allowed me to import the file using psql. I had to fix some of the indexes using this method.
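One caveat with the blanket replacement (my note, not part of the original fix): s|public|data|g also rewrites any occurrence of the word 'public' inside the dumped data itself. Anchoring the pattern to qualified names is somewhat safer, though it can still miss statements like SET search_path:

$ sed -i -e 's|public\.|data.|g' out.sql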
