I'd like to set up a zero-maintenance hands-off MySQL replication regimen for backup purposes.
It's been my experience, though, that these things inevitably eventually get out of sync. Perhaps it's due to developer abuse, unscheduled accidents, or other unknowns. But I'd like to periodically re-sync the slave(s) just to make sure that an unnoticed replication problem doesn't go uncorrected long-term.
Does anyone have a working/tested solution for going about this?
Here's my proposed solution, but I'm open to other possibilities from anyone who has experience dealing with this issue:
Presumably this is as simple as running the following on the master:

mysqldump --all-databases --master-data > dbdump.sql

and then loading that SQL file on each of the slaves, running STOP SLAVE beforehand and START SLAVE afterward. Theoretically the slaves will use the updated master coordinates to re-establish synchronization, and off we go. Presumably this could be automated as a weekly job during off-peak hours.
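The dump-and-reload cycle described above could be sketched roughly like this. The slave host names, the dump path, and passwordless mysql client access are all hypothetical placeholders, not part of any tested setup:

```shell
#!/bin/sh
# Sketch of the proposed weekly re-sync job.
# Host names, dump path, and credentials are hypothetical placeholders.

resync_slaves() {
    dump="$1"; shift  # first argument: dump file path; the rest: slave hosts

    # --master-data embeds a CHANGE MASTER TO statement with the master's
    # current binlog file and position, so each slave resumes from there.
    mysqldump --all-databases --master-data > "$dump"

    for host in "$@"; do
        mysql -h "$host" -e "STOP SLAVE;"   # pause replication
        mysql -h "$host" < "$dump"          # reload a clean copy of the master
        mysql -h "$host" -e "START SLAVE;"  # resume from the dump's coordinates
    done
}

# Example cron-style use (weekly, off-peak):
# resync_slaves /var/backups/dbdump.sql slave1.example.com slave2.example.com
```

Whether this is safe to run unattended depends on the dump size and how long the slaves are allowed to lag while reloading.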
But is there a better way to do this?
ssh directly to the other end's mysql command). It works, but is not used as a "zero-maintenance" approach; mostly it just spares the "optimize table" runs. The proper approach would be not to use MySQL replication at all. If it is just for backup purposes, consider either snapshotting the volume or using InnoDB to reduce locking contention during backup.
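For the InnoDB route mentioned above, a consistent dump can be taken without locking tables by using mysqldump's --single-transaction option. A minimal sketch, where the backup path is an assumption rather than anything from the thread:

```shell
#!/bin/sh
# Hypothetical non-locking backup for InnoDB tables; the target path is
# an assumption, not something from the thread.

backup_innodb() {
    # --single-transaction opens a consistent-read snapshot instead of
    # locking tables, so writers are not blocked. This is only reliable
    # when every table being dumped uses InnoDB.
    mysqldump --all-databases --single-transaction > "$1"
}

# Example: backup_innodb /var/backups/dbdump-$(date +%F).sql
```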