Post by John Lewis
What I do is store my containers in a disk image with a filesystem,
usually ext4. I store the image in the LXC server's /opt. I mount the LXCs
to /srv before starting them because I haven't figured out how to run them
directly out of the disk images yet. I back up the disk images with
rsnapshot with a sparse option. It saves a lot of time because there is
only one file to back up instead of hundreds for each LXC.
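The disk-image setup described above might look something like this sketch. The paths (/opt/lxc-images, /srv/lxc) and the container name "web1" are placeholders, not from the original post:

```shell
# Hypothetical sketch of the setup above; paths and names are placeholders.

# Create a sparse 10 GiB image: blocks are only allocated as they are
# written, so the file initially takes almost no disk space.
truncate -s 10G /opt/lxc-images/web1.img
mkfs.ext4 -q -F /opt/lxc-images/web1.img

# Loop-mount it where the container expects its rootfs.
mkdir -p /srv/lxc/web1
mount -o loop /opt/lxc-images/web1.img /srv/lxc/web1
```

The sparse allocation is what makes an rsync/rsnapshot sparse option worthwhile: the backup copy can stay as small as the data actually written.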
... and that is one of the reasons more and more people use zfs :)
tar -> basically can't do incremental snapshots
rsync on rootfs -> very long incremental backup time if you have lots of
files
rsync on disk image -> still need to read the whole image, checksum every
"block", and compare it (source vs destination), so still relatively slow,
particularly if your image is big. Even when only a single byte changed.
Also, with those three, you need to shut down the container to get a
consistent backup (or at least "lxc-freeze" it)
zfs snapshot + send receive -> should be much faster than any of the above
methods for incremental backups, since basically it already knows "what has
changed between snapshots". If you only have a small amount of changed data
between snapshots, the incremental send/receive will be very fast. Plus, in
most scenarios, there's no need to shut down/stop the container.
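A minimal sketch of that zfs workflow; the dataset name (tank/lxc/web1), snapshot names, and the backup host are all placeholders:

```shell
# Placeholders throughout: adjust dataset, snapshot, and host names.
zfs snapshot tank/lxc/web1@mon      # point-in-time snapshot, near-instant
zfs snapshot tank/lxc/web1@tue      # the next day's snapshot

# One initial full copy to the backup host...
zfs send tank/lxc/web1@mon | ssh backuphost zfs receive backup/web1

# ...then incrementals that only carry the blocks that changed
# between @mon and @tue, with no full read of the source:
zfs send -i @mon tank/lxc/web1@tue | ssh backuphost zfs receive backup/web1
```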
Post by John Lewis
To restore I mount the disk image and rsync the target file back to the
original container, or copy the whole container disk image over the one
that wasn't in the state I needed it to be in. To back up databases,
you need to make sure you get a database dump before the backup. The way I
like to do it is with a remote ssh command, dumping the database over an
ssh connection from the backup machine: I send the dump command up over
standard input and copy the database dump back down over standard output.
Keeping database files on a separate image file is helpful to reduce the
size of backups but not required.
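One way to read that stdin/stdout dump approach is the sketch below. The host and database names are placeholders, and it assumes a MySQL-style database in the container; the original post doesn't name a specific database:

```shell
# Run from the backup machine. The dump command travels up over stdin,
# and the dump itself streams back down over stdout; nothing is written
# on the database host.
echo 'mysqldump --single-transaction mydb' \
    | ssh root@db-container bash \
    | gzip > /backup/mydb-$(date +%F).sql.gz
```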
That's the "normal", common database-recommended method. Safe, but slow, in
particular if you have a large db (e.g. > 10 GB).
The "quick-and-relatively-safe" way is to use snapshots (e.g. like the zfs
scenario I wrote above). Most modern databases can survive an unclean
shutdown (e.g. what happens when the server crashes or you experience a
power failure), so as long as all the necessary files (usually the data
files and journal) can be snapshotted at the same time, you should be able
to recover from the snapshot.
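Sketched out, that might look like the following; "tank/lxc/db1" is a hypothetical dataset assumed to hold both the data files and the journal, so a single snapshot captures them atomically:

```shell
# Crash-consistent snapshot of the (hypothetical) database dataset,
# taken while the database keeps running.
zfs snapshot tank/lxc/db1@nightly

# To verify a restore, clone the snapshot and start the database against
# the clone; it recovers by replaying its journal, exactly as it would
# after a power failure.
zfs clone tank/lxc/db1@nightly tank/restore-test
```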
IIRC btrfs should also support snapshots and incremental send/receive, but I
haven't tested it personally.
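The btrfs equivalent would presumably look something like this; untested sketch, and the subvolume paths and backup host are placeholders:

```shell
# -r creates read-only snapshots, which btrfs send requires.
btrfs subvolume snapshot -r /srv/lxc/web1 /srv/snap/web1-mon
btrfs subvolume snapshot -r /srv/lxc/web1 /srv/snap/web1-tue

# -p names the parent snapshot, so only the difference is sent:
btrfs send -p /srv/snap/web1-mon /srv/snap/web1-tue \
    | ssh backuphost btrfs receive /backup
```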