I have a 15 GB compressed PostgreSQL dump (as a pgbackup) that is closer to 60 GB when extracted. I have a cloud server with only 69 GB of free space (54 GB after the pgbackup is copied onto the box). Is there a cunning way I could split the pgbackup into lots of smaller compressed pgbackup files, delete the original large pgbackup, and then iterate over the smaller ones, decompressing, executing and deleting each in turn, so that I end up with the entire dataset in a PostgreSQL database despite the disk constraints?
1 Answer
Here are a few possible workarounds. Choose the one that meets your requirements.
via stdin
The answer is quite simple: use a pipe.

    xzcat dumpFile.xz | pg_restore ...

This way you don't need any additional disk space to extract the compressed dump file.
See man pg_restore:
-f filename, --file=filename
Specify output file for generated script, or for the listing when used with -l.
Default is the standard output.
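A minimal concrete sketch, assuming an xz-compressed custom-format dump and a target database called mydb (the file name, database name and options here are placeholders):

    # stream the decompressed archive straight into pg_restore;
    # nothing gets written to disk along the way
    xzcat dumpFile.xz | pg_restore -d mydb -v

Note that restoring from a pipe rules out pg_restore's parallel mode (-j), which needs a seekable input file.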
via stdin and ssh
If you are too low on disk space even to copy the compressed dump file to the server, use ssh to run pg_restore remotely:
    xzcat dumpFile.xz | ssh -T user@dbserver pg_restore ...

pg_restore via network
Another alternative is to reconfigure your PostgreSQL server to allow connections from your local workstation. In that case you can run pg_restore on your local system, setting PGHOST, PGPORT and PGUSER to the correct values.
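A rough sketch of that variant, run from the local workstation; the host name, role and database name are assumptions:

    # connection parameters for the remote database server
    export PGHOST=dbserver.example.com
    export PGPORT=5432
    export PGUSER=dbuser

    # decompress locally and restore across the network connection
    xzcat dumpFile.xz | pg_restore -d mydb

The server side has to permit the connection (pg_hba.conf and, if necessary, listen_addresses), and the full restore traffic then travels over the network.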
splitting the dump
- Download the dump to a machine with enough disk space.
- Use the split command (or any similar tool) to cut it into smaller pieces, as sketched after this list.
- Remove the dump from the database server.
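For the split step, a minimal sketch assuming GNU split and 1 GB pieces (the file name and piece size are placeholders):

    # cut the compressed dump into 1 GB pieces: dumpFile.xz.part-aa, -ab, ...
    split -b 1G dumpFile.xz dumpFile.xz.part-

If it is the compressed file that gets split like this, the receiving pipeline on the server (shown below) would also need a decompression step in front of pg_restore, e.g. ... | xzcat | pg_restore ....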
On the database server, create a job that waits for partial dump files to arrive and feeds them to pg_restore, e.g.
    while sleep 10
    do
        if [ -r dumpPart ]
        then
            cat dumpPart
            echo "next part is loaded" >&2
            rm -f dumpPart
        fi
    done | pg_restore ...

Upload your dump parts one by one, each with the destination name dumpPart. Make sure the upload pace stays well below the rate set by the sleep statement above, so that each part has been consumed and removed before the next one arrives.
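To complement that loop, here is a hedged sketch of the upload side; the host and part names are assumptions, and uploading to a temporary name before renaming keeps the loop from reading a half-transferred file:

    # push each piece, then wait until the server-side loop has consumed it
    for part in dumpFile.xz.part-*
    do
        scp "$part" user@dbserver:dumpPart.tmp
        ssh user@dbserver 'mv dumpPart.tmp dumpPart'
        while ssh user@dbserver 'test -e dumpPart'; do
            sleep 10
        done
    done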
- That'd work if I had 75 GB or more space (enough for the underlying pgbackup of 15 GB and the uncompressed 60 GB version). Sadly I only have 69 GB. – Noxville, Mar 11, 2013 at 13:10
- @Noxville see update with ssh usage. – H.-Dirk Schmitt, Mar 11, 2013 at 13:18
- I don't really see the point in downloading the entire dump onto another machine to restore it. Technically the information should be decompressible in a piece-wise manner (not just one big-bang shot). I also don't have access to another machine that I could ssh-dump from. – Noxville, Mar 11, 2013 at 13:32
- @Noxville See update with the split hack. – H.-Dirk Schmitt, Mar 11, 2013 at 13:51