
I'm currently using this cron entry to back up all my databases into a single .sql.gz file:

0 0 * * * root mysqldump -u root -pPASSWORD --all-databases | gzip > /home/backup/db/`date +\%G-\%m-\%d`_db.sql.gz 

Instead, I'd like one .tar.gz file that contains a separate archive for each database I have. Is this possible?

  • Maybe sourceforge.net/projects/automysqlbackup is something for you. You would only have to zip the whole folder structure this script generates. You can do this with a post-script. Commented Jul 17, 2012 at 18:02

3 Answers


Something like this may work. It is untested, but only slightly different from what I am using for backups on my own systems.

# define common vars
OPTIONS="--verbose --lock-tables --flush-logs --force --quick --single-transaction"
AUTHFILE="/etc/mysql/rootauth.cnf"
BACKUPDIR="/srv/backup/mysql/"
BACKUPDATE=`date +"%y%m%d%H"`

# create temp folder (this isn't entirely safe, but be sure only root or the backup
# user has write access here; you might want to use mktemp)
mkdir ${BACKUPDIR}/tmp/

# get a list of all the databases on the system
DBSQL="SELECT SCHEMA_NAME FROM information_schema.SCHEMATA where SCHEMA_NAME!='information_schema' \
AND SCHEMA_NAME!='performance_schema' order by SCHEMA_NAME"
DBS=`/usr/bin/mysql --defaults-extra-file=${AUTHFILE} --batch \
--skip-column-names --execute "$DBSQL"`
DBS=`echo $DBS | tr -d '\n' | sed -e "s/ \+/ /g"`

for DB in $DBS; do
    # perform a per-database dump
    BACKUPDIRDB="${BACKUPDIR}/tmp/${DB}"
    mkdir -p ${BACKUPDIRDB}
    /usr/bin/mysqldump --defaults-extra-file=${AUTHFILE} \
        ${OPTIONS} $DB > ${BACKUPDIRDB}/backup_${BACKUPDATE}
done

# create archive of everything
tar -czvf ${BACKUPDIR}/backup_${BACKUPDATE}.tar.gz ${BACKUPDIR}/tmp/

# remove temp files
rm -rf ${BACKUPDIR}/tmp/
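For reference, the ${AUTHFILE} above is not your server's my.cnf; it is a separate MySQL option file holding the client credentials, which mysql and mysqldump read via --defaults-extra-file. A minimal sketch of what it might contain (the user and password here are placeholders; keep the file readable only by root):

[client]
user     = root
password = PASSWORD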
  • I'm going to try this and let you know if it worked! ;) Commented Jul 17, 2012 at 17:52
  • I created a test.sh file in /srv to test it. I ran it via "bash /srv/test.sh" but got this result: /srv/test.sh: line 16: unexpected EOF while looking for matching ``' /srv/test.sh: line 30: syntax error: unexpected end of file What am I doing wrong? Commented Jul 17, 2012 at 18:01
  • I think I may have lost a backtick (`) on my copy and paste into the editor here. Check line 16 and line 30 and make sure the quotes are all properly balanced and closed. Commented Jul 17, 2012 at 18:05
  • I think I found it, I'm testing it again. ;) Commented Jul 17, 2012 at 18:09
  • Could not open required defaults file: /etc/mysql/rootauth.cnf Fatal error in defaults handling. Program aborted tar: Removing leading `/' from member names /srv/backup/mysql//tmp/ I don't have the file /etc/mysql/rootauth.cnf, only my.cnf; should I replace it with that? Commented Jul 17, 2012 at 18:11

Create a script like this to mysqldump all the databases in parallel:

# build a space-separated list of databases, skipping the system schemas
DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +\%G-\%m-\%d`
mkdir ${BACKUP_DEST}

# launch one mysqldump per database in the background, then wait for all of them
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
done
wait

Then place this script in the crontab.
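For example, an entry matching the /etc/crontab format from the question might look like this (the script path is a placeholder):

0 0 * * * root /bin/bash /usr/local/bin/mysqldump_all.sh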

If there are too many databases to dump all at once, you could dump 5 at a time like this:

# build a space-separated list of databases, skipping the system schemas
DBLIST=`mysql -uroot -pPASSWORD -ANe"SELECT GROUP_CONCAT(schema_name) FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','performance_schema')" | sed 's/,/ /g'`
MYSQLDUMP_OPTIONS="-uroot -pPASSWORD --single-transaction --routines --triggers"
BACKUP_DEST=/home/backup/db/`date +\%G-\%m-\%d`
mkdir ${BACKUP_DEST}

COMMIT_COUNT=0
COMMIT_LIMIT=5
for DB in `echo "${DBLIST}"`
do
    mysqldump ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_DEST}/${DB}.sql.gz &
    (( COMMIT_COUNT++ ))
    # once COMMIT_LIMIT dumps are in flight, wait for the whole batch to finish
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
# wait for any dumps still running from the last partial batch
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi

You can then add the tar commands to the end of the script.
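For instance, appended after the final wait, something like this would bundle the per-database dumps into the single archive you asked for (removing the originals afterwards is optional):

# bundle the per-database .sql.gz files into one .tar.gz, then clean up
tar -czf ${BACKUP_DEST}.tar.gz -C `dirname ${BACKUP_DEST}` `basename ${BACKUP_DEST}`
rm -rf ${BACKUP_DEST}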

  • I don't have the file /tmp/DBList.txt :( Yes, as Zoredache said, I'd like one big archive with per-database files inside; sorry for my poor explanation. Commented Jul 17, 2012 at 17:52
  • I'm on Ubuntu 12.04 and MySQL 5.5.24, if useful :) Commented Jul 17, 2012 at 17:54
  • I changed my answer to not use /tmp/DBList.txt Commented Jul 17, 2012 at 17:56
  • It's probably my fault: I created a .sh file with your script (the first one), then ran it from the terminal, but I got this: line 1: unexpected EOF while looking for matching `'' line 9: syntax error: unexpected end of file Commented Jul 17, 2012 at 18:04
  • Okay, I found the error. I fixed it and started the script; I get no errors, but nothing happens and the file doesn't get created. :( Commented Jul 17, 2012 at 18:13

You could accomplish this by making a script, run from cron, that dumps each database individually and then, as its final step, archives all of the files together into one .tar.gz.

So in your case, you would remove the --all-databases option and put the name of a single database there, then repeat that line for each database you have. After all the dumps have been made, create a tar of those files and compress it. Last but not least, perform any necessary cleanup. Put all of that into a script and run the script from cron.
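A minimal sketch of that approach (the database names, credentials, and paths are placeholders):

#!/bin/bash
# dump each database to its own file, then bundle everything into one .tar.gz
BACKUPDIR=/home/backup/db
STAMP=`date +%Y-%m-%d`
mkdir -p ${BACKUPDIR}/${STAMP}

# one entry per database; extend this list whenever you add a database
for DB in db1 db2 db3
do
    mysqldump -u root -pPASSWORD ${DB} | gzip > ${BACKUPDIR}/${STAMP}/${DB}.sql.gz
done

# archive all the dumps into a single file, then clean up the working folder
tar -czf ${BACKUPDIR}/${STAMP}_db.tar.gz -C ${BACKUPDIR} ${STAMP}
rm -rf ${BACKUPDIR}/${STAMP}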

  • Yeah, I thought of doing that; I was looking for a solution to "dump 'em all" without having to add a line for each of them (also because I add new DBs often and I'm worried I'll forget to add one). Thanks anyway for the reply! ;) Commented Jul 17, 2012 at 17:47
