
I'm trying to create a one-off snapshot of 5.5TB of data to 3 external 2TB drives. The data is on an XFS partition, so the logical option seemed to be xfsdump, as it can span multiple devices. As a test, I've created a number of smaller partitions on one of the drives so I can force it to reach the end of the drive in minutes instead of hours. The external drive contains two partitions, /dev/sde1 and /dev/sde2, both 256MB in size.
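(If anyone wants to reproduce the test setup: the small partitions can be carved out with something like the parted commands below. The device name /dev/sde and the sizes are just illustrative, matching what I described above.)

 # Illustrative only: create two small partitions so xfsdump hits
 # end-of-media in minutes rather than hours.
 parted /dev/sde --script mklabel msdos
 parted /dev/sde --script mkpart primary 1MiB 257MiB    # becomes /dev/sde1
 parted /dev/sde --script mkpart primary 257MiB 513MiB  # becomes /dev/sde2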

This is the command I'm using:

xfsdump -o -l0 -s daily.0/jones -f /dev/sde2 -p 10 /snaps

I'm prompted for dump and media labels and the backup begins. After around 30 seconds, I get these messages:

 xfsdump: ending media file
 xfsdump: media file size 750000128 bytes
 xfsdump: dump size (non-dir files) : 728189552 bytes
 xfsdump: NOTE: dump interrupted: 27 seconds elapsed: may resume later using -R option
 xfsdump: Dump Status: INTERRUPT

That's pretty much what I was hoping to see, as it hit the end of the available space on /dev/sde2.

Running xfsdump -I looks good, too:

 # xfsdump -I
 file system 0:
     fs id:          767465ce-3031-4672-8341-dfb135d8a463
     session 0:
         mount point:    broze:/snaps
         device:         broze:/dev/mapper/vg0-snaps
         time:           Tue Nov 3 14:23:57 2009
         session label:  "dump1"
         session id:     531a3622-84af-4767-a54b-a1b11a962dcb
         level:          0
         resumed:        NO
         subtree:        YES
         streams:        1
         stream 0:
             pathname:       /dev/sde2
             start:          ino 535 offset 0
             end:            ino 1260 offset 0
             interrupted:    YES
             media files:    1
             media file 0:
                 mfile index:    0
                 mfile type:     data
                 mfile size:     750000128
                 mfile start:    ino 535 offset 0
                 mfile end:      ino 1260 offset 0
                 media label:    "drive1"
                 media id:       721c35ba-e844-47f8-8692-0d3122d88093
 xfsdump: Dump Status: SUCCESS

This seems to indicate I should be able to resume the backup. However, if I run xfsdump with the -R flag and specify a new device to back up to, this is what I get:

 # xfsdump -R -o -l0 -s daily.0/jones -f /dev/sde1 -p 10 /snaps
 xfsdump: using file dump (drive_simple) strategy
 xfsdump: version 2.2.45 (dump format 3.0) - Running single-threaded
  ============================= dump label dialog ==============================
 please enter label for this dump session (timeout in 300 sec)
  -> dump1.contd
 session label entered: "dump1.contd"
  --------------------------------- end dialog ---------------------------------
 xfsdump: ERROR: resume (-R) option inappropriate: no interrupted level 0 dump to resume
 xfsdump: Dump Status: ERROR

The /dev/sde2 partition does appear to contain a valid, if incomplete, backup as I can restore from it using the xfsrestore command.
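For example, a partial restore along these lines works (the target directory is just a scratch path I picked):

 # Illustrative: pull files back out of the interrupted dump on /dev/sde2
 mkdir -p /tmp/restore-test
 xfsrestore -f /dev/sde2 /tmp/restore-test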

Any idea how I can get xfsdump to span multiple devices? Is there a better way to accomplish this without manually trying to partition the data into 2TB chunks?

Thanks!

3 Answers


This is tricky. First, I don't think I would recommend doing this if at all possible. Second, it's probably better to back up the data in chunks rather than the filesystem itself. If you must back it up as one continuous filesystem, I imagine you could create an LVM2 volume spanning your three drives as a virtual container of sorts and then dd/xfsdump the whole thing into it, along the lines of the sketch below. I'm not sure this is a reasonable approach, but it should work. :)
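A rough sketch of that idea, assuming the three externals appear as /dev/sdc, /dev/sdd and /dev/sde (the device and volume names are placeholders, and I haven't tested xfsdump at this scale):

 # Illustrative: join the three 2TB drives into one ~6TB logical volume
 pvcreate /dev/sdc /dev/sdd /dev/sde
 vgcreate backupvg /dev/sdc /dev/sdd /dev/sde
 lvcreate -l 100%FREE -n backuplv backupvg
 mkfs.xfs /dev/backupvg/backuplv
 mount /dev/backupvg/backuplv /mnt/backup
 # dump the snapshot filesystem into a single file on the spanned volume
 xfsdump -l0 -s daily.0/jones -f /mnt/backup/snaps.xfsdump /snaps

The catch is that restoring later requires all three drives present and the volume group activated.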

Good luck

  • I'm curious why you think multi-volume backups are a bad idea. I can't think of a single backup system that doesn't do pretty much exactly what I described. Commented Nov 4, 2009 at 17:06
  • Oh, I don't think multi-volume backup is a bad idea. There is nothing wrong with it when it comes to data. I'm just not a fan of multi-volume backup when it comes to raw filesystem information, and in the case of xfsdump in particular I thought it might not be a good idea to use it that way. If anyone has specific experience with multi-volume xfsdump, I'd like to know. :) Commented Nov 4, 2009 at 18:17

After much wrangling, I wasn't able to get the xfsdump solution to work. So I did what I should've done in the first place: tar.

 tar -cvM -L 1953383400 -f /mnt/backup1.tar . 

The filesystem is backed up to a tarball on the drive mounted at /mnt/. The -M flag tells tar it will be creating a multi-volume archive. The -L flag sets the size of each volume, in units of 1024 bytes. When the backup reaches that size, tar pauses and asks politely for a new volume:

 Prepare volume #2 for `backup1.tar' and hit return: 

Hitting ? gives a menu of options:

 Prepare volume #2 for `backup1.tar' and hit return: ?
  n name        Give a new file name for the next (and subsequent) volume(s)
  q             Abort tar
  y or newline  Continue operation
  !             Spawn a subshell
  ?             Print this list
 Prepare volume #2 for `backup1.tar' and hit return:

So I unmount the external drive, attach the next one, mount it, enter n backup2.tar at the prompt, and the backup continues.
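If babysitting the prompt gets tedious, GNU tar can also hand the volume change off to a script with -F (--new-volume-script). A sketch, assuming you mount each new drive at /mnt before letting the script return (the script itself is mine, not taken verbatim from the tar docs):

 #!/bin/bash
 # next-volume.sh -- run by tar at the end of each volume (implies -M).
 # $TAR_VOLUME is the number of the volume about to be written; writing a
 # name to the descriptor in $TAR_FD tells tar which archive file to use next.
 echo "Swap in the next drive, mount it at /mnt, then press enter" >&2
 read -r reply < /dev/tty
 echo "/mnt/backup${TAR_VOLUME}.tar" >&$TAR_FD

Invoked as: tar -cvM -L 1953383400 -F ./next-volume.sh -f /mnt/backup1.tar .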

Good ol' tar.


I tested basically the same use case you describe here, except dumping to a file, and it worked without a problem. I suspect your usage is fine and something else is wrong. This probably isn't the best forum for it, but I've found the developers on the XFS mailing list and in the #xfs IRC channel on the Freenode network to be extremely helpful.
