Description
I have been using z3 for over a month successfully. Thank you! I have recently run into a problem and have a potential enhancement suggestion.
I have a large dataset that is stored with compression enabled. Recently, full uploads to S3 started failing: the error from S3 indicates the upload exceeded the 5 T object size limit (the dataset is 6.3 T). Two issues:

Firstly, I expected pput to break the upload into chunks, but it does not seem to do that:
```
zfs send 'pool/data@zfs-auto-snap_daily-2018-12-23-0832' | pput --quiet --estimated 6947160277512 --meta size=6947160277512 --meta isfull=true z3-backup/pool/data@zfs-auto-snap_daily-2018-12-23-0832 -
```
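For context, S3 caps a single object at 5 TiB, and a multipart upload at 10,000 parts of between 5 MiB and 5 GiB each, so a 6.3 TiB stream can only succeed if the uploader splits it into large enough parts. A minimal sketch of picking a valid part size for a stream of known length (the function name is made up; the constants are S3's documented limits):

```python
MiB = 1024 ** 2
GiB = 1024 ** 3
TiB = 1024 ** 4

MAX_PARTS = 10_000       # S3 multipart upload part-count limit
MIN_PART = 5 * MiB       # minimum part size (except the final part)
MAX_PART = 5 * GiB       # maximum part size
MAX_OBJECT = 5 * TiB     # maximum total object size

def choose_part_size(estimated_bytes):
    """Pick a part size so the stream fits in at most MAX_PARTS parts."""
    if estimated_bytes > MAX_OBJECT:
        raise ValueError("stream exceeds the 5 TiB S3 object limit")
    # Round up so MAX_PARTS parts always cover the whole stream.
    part = -(-estimated_bytes // MAX_PARTS)
    return min(max(part, MIN_PART), MAX_PART)
```

With the original 6947160277512-byte estimate this raises immediately, which matches the failure seen here; the compressed 2655377785352-byte estimate yields a part size of roughly 254 MiB, comfortably inside the limits.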
Secondly, the dataset itself is only ~2.6 T in its compressed state, so I modified the snap.py code to add "-Lce" to the "zfs send" invocation. This seems better; the upload is still running:
```
zfs send -Lce 'pool/data@zfs-auto-snap_daily-2018-12-23-0832' | pput --quiet --estimated 2655377785352 --meta size=2655377785352 --meta isfull=true z3-backup/pool/data@zfs-auto-snap_daily-2018-12-23-0832 -
```
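The size estimate passed to `--estimated` could account for the send flags by taking it from a dry run: `zfs send -nP -Lce <snapshot>` prints a parsable `size <bytes>` line. A sketch of extracting that estimate, assuming the parsable dry-run output format (the sample text below is illustrative, not captured from this system):

```python
def parse_send_estimate(dry_run_output):
    """Extract the byte estimate from `zfs send -nP` output.

    With -P (parsable output), one line has the form: size\t<bytes>
    """
    for line in dry_run_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "size":
            return int(fields[1])
    raise ValueError("no size line in zfs send dry-run output")

# Illustrative output of:
#   zfs send -nP -Lce 'pool/data@zfs-auto-snap_daily-2018-12-23-0832'
sample = (
    "full\tpool/data@zfs-auto-snap_daily-2018-12-23-0832\t2655377785352\n"
    "size\t2655377785352\n"
)
```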
It would be good to allow custom options to be passed to zfs send as part of z3 backup, e.g. --zfs-options "Lce".
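One way such an option could be wired in is to splice the user-supplied flags into the `zfs send` argument list ahead of the snapshot name. A rough sketch under that assumption (the option name and helper are hypothetical, not z3's actual code):

```python
import shlex

def build_send_command(snapshot, zfs_options=""):
    """Build a zfs send argv, inserting user-supplied extra flags."""
    cmd = ["zfs", "send"]
    if zfs_options:
        # e.g. --zfs-options "-Lce" -> ["-Lce"]
        cmd.extend(shlex.split(zfs_options))
    cmd.append(snapshot)
    return cmd
```

Using shlex.split keeps quoting intact if the user passes several flags, and omitting the option leaves the current behavior unchanged.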