Discussion:
[lxc-users] What is the right way to back up and restore linux containers?
Eax Melanhovich
2015-12-04 16:03:53 UTC
Hello.

Let's say I have some container. I would like to run something like:

lxc-backup -n test-container my-backup.tgz

Then move the backup somewhere (say, to Amazon S3). Then say I would like
to restore my container or create a copy of it on a different machine. So I
need something like:

lxc-restore -n copy-of-container my-backup.tgz

I discovered lxc-snapshot, but it doesn't do exactly what I need.

So what is the right way of backing up and restoring linux containers?
--
Best regards,
Eax Melanhovich
http://eax.me/
Saint Michael
2015-12-04 16:32:11 UTC
I was going to ask the same question.
It is a very important one. I am moving containers via rsync, but it takes
too long.
Post by Eax Melanhovich
So what is the right way of backing up and restoring linux containers?
[...]
_______________________________________________
lxc-users mailing list
http://lists.linuxcontainers.org/listinfo/lxc-users
Bostjan Skufca
2015-12-04 16:58:02 UTC
It depends on whether you need a consistent copy and how much downtime you can tolerate.

If an inconsistent copy is enough, you can run rsync over the storage of the
running container (on the host, not in the container) and be done with it.

Rsync:
I find rsync useful and fast, provided that:
- the whole container filesystem's metadata fits into memory
- not too much data changes between subsequent rsync runs

So, my simplified procedure is:
1: rsync #1 - does most of the work, takes time, but can be run on running
container
2: rsync #2 - to see how many files have changed since the initial run (gives
a good estimate of the upcoming downtime)
3: lxc-stop -n container (on host 1)
4: rsync #3
5: lxc-start -n container (on host 2)

This procedure gives me about 10 seconds of downtime for containers with
small filesystems (up to 50GB).

Block device migration:
Rsync is fast if it operates on not too many files. If you are getting long
rsync runs because of the number of small files, you might be better off
migrating the whole block device. You can go about it with LVM or btrfs
snapshots too. I do not usually use this.


LXC vs LXD:
LXC is generally single-host-centred, so you have to do things manually.
LXD, on the other hand, supports operations on multiple hosts, but others are
more qualified to summarize what is currently possible (creating LXD host
associations between on-premises and cloud-provider hosts, etc.).

b.
Post by Saint Michael
I am moving containers via rsync, but it takes too long. [...]
Saint Michael
2015-12-04 17:20:34 UTC
What would be the right tar parameters to compress and decompress the whole
rootfs, including devices and special files?
Post by Bostjan Skufca
[...]
Bostjan Skufca
2015-12-04 17:36:15 UTC
"man tar"

https://www.google.si/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=tar%20preserve%20everything

I would suggest -p and --numeric-owner, and maybe --acls and --xattrs, but,
as I said before, I generally use rsync.

b.
Post by Saint Michael
What would be the right tar parameters to compress and decompress the whole
rootfs, including devices and special files?
Mark S.S. Slatter
2015-12-04 20:04:17 UTC
For what it's worth, my notes from various research indicate:

======= MOVING LXC CONTAINERS =====
When moving from one server to another, it is important NOT to simply copy it:
all permissions inside the container must be preserved. This is done by the
following:

On OLD server
tar --numeric-owner -czvf mycontainer.tgz /var/lib/lxc/my_container

On NEW server
tar --numeric-owner -xzvf mycontainer.tgz -C /var/lib/lxc/

Be sure to set the appropriate IP address info on new location if it will
change.
==============================
I believe this is what I used the last time and it worked.
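A quick way to sanity-check that tar round-trip before trusting it with a real container is to run it on a throwaway tree. The demo below uses a temporary directory instead of /var/lib/lxc/my_container (all paths are illustrative), and adds -p on extract; on GNU tar you could also add --acls/--xattrs if you use them:

```shell
#!/bin/sh
set -e

WORK=$(mktemp -d)
mkdir -p "$WORK/src/rootfs/etc"
echo "hostname=demo" > "$WORK/src/rootfs/etc/hostname"
chmod 600 "$WORK/src/rootfs/etc/hostname"
mkfifo "$WORK/src/rootfs/fifo"        # a special file, like those under a rootfs

# "OLD server": --numeric-owner records raw uid/gid numbers, which matters
# because uid->name mappings can differ between hosts
tar --numeric-owner -czf "$WORK/container.tgz" -C "$WORK" src

# "NEW server": -p restores permissions exactly as recorded
mkdir "$WORK/restore"
tar --numeric-owner -p -xzf "$WORK/container.tgz" -C "$WORK/restore"

ls -l "$WORK/restore/src/rootfs/etc/hostname"
```

If the mode bits and the fifo survive the round-trip on the throwaway tree, the same flag set should behave on the real container directory (device nodes additionally require running tar as root).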

Thanks,

Mark.
Post by Bostjan Skufca
I would suggest -p and --numeric-owner, and maybe --acls and --xattrs, but,
as I said before, I generally use rsync.
Saint Michael
2015-12-05 02:33:45 UTC
I normally copy containers with rsync to a different server, and with
plain "cp -dpR --sparse=never" on the same box. Should I use tar instead?
I noticed that rsync is slow, even on the same network.
Post by Mark S.S. Slatter
On OLD server: tar --numeric-owner -czvf mycontainer.tgz /var/lib/lxc/my_container [...]
John Lewis
2015-12-05 12:10:52 UTC
What I do is store my containers in disk images with a filesystem,
usually ext4. I store the images in the LXC server's /opt. I mount the
containers to /srv before starting them, because I haven't figured out how to
run them directly out of the disk images yet. I back up the disk images
with rsnapshot with a sparse option. It saves a lot of time because
there is only one file to back up instead of hundreds for each LXC.
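The disk-image approach benefits a lot from keeping the image sparse, so the backup tool only moves allocated blocks. A minimal sketch, assuming GNU coreutils: it only shows apparent vs allocated size, with the root-only mkfs/mount steps left as comments (all paths are illustrative):

```shell
#!/bin/sh
set -e

IMG=$(mktemp -u).img
truncate -s 1G "$IMG"      # 1 GiB apparent size, ~0 bytes actually allocated

ls -lh "$IMG"              # apparent size: 1.0G
du -h  "$IMG"              # allocated size: (close to) 0

# With root you would then do, e.g.:
#   mkfs.ext4 -F "$IMG"
#   mount -o loop "$IMG" /srv/mycontainer
# and copy the image with a sparse-aware option (e.g. rsync --sparse),
# so holes are not turned into real zero-filled blocks on the backup side.
```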

To restore, I mount the disk image and rsync the target file back to the
original container, or copy the whole container disk image over the
one that wasn't in the state I needed it to be in. To back up
databases, you need to make sure you get a database dump before the
backup. The way I like to do it is with a remote ssh command: I dump the
database over an ssh socket from the backup machine, sending the
dump command via standard input and receiving the database dump
via standard output. Keeping database files on a separate image
file helps reduce the size of backups, but is not required.
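The dump-over-ssh plumbing described above can be sketched as follows. Here `sh -s` stands in for `ssh db-host sh -s` so the stdin/stdout flow is visible without a real database host, and the mysqldump line in the comment is a placeholder, not a tested command:

```shell
#!/bin/sh
set -e

OUT=$(mktemp)

# The command to run travels over standard input; its output (the dump)
# comes back over standard output and is captured on the backup machine.
printf '%s\n' 'echo "-- pretend this is mysqldump output"' \
  | sh -s > "$OUT"

# Real-world equivalent (placeholders, needs a reachable db-host):
#   echo 'mysqldump --single-transaction mydb' | ssh db-host sh -s | gzip > mydb.sql.gz

head -1 "$OUT"
```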
Post by Saint Michael
I am moving containers via rsync, but it takes too long. [...]
Fajar A. Nugraha
2015-12-05 13:28:59 UTC
Post by John Lewis
I back up the disk images with rsnapshot with a sparse option. It saves a
lot of time because there is only one file to back up instead of hundreds
for each LXC.
... and that is one of the reasons more and more people use zfs :)

tar -> basically can't do incremental snapshots
rsync on rootfs -> very long incremental backup time if you have lots of
files
rsync on a disk image -> still needs to read the whole image, checksum every
"block", and compare it (source vs destination), so it is still relatively
slow, particularly if your image is big, even when only a single byte changed.

Also, with those three, you need to shut down the container to get a
consistent backup (or at least "lxc-freeze" it).

zfs snapshot + send/receive -> should be much faster than any of the above
methods for incremental backups, since zfs already knows what has
changed between snapshots. If only a small amount of data changes
between snapshots, the incremental send/receive will be very fast. Plus, in
most scenarios, there is no need to shut down or stop the container.
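A sketch of that zfs workflow. The pool/dataset names (tank/lxc/mycontainer, backup/mycontainer) and the backup host are placeholders, and the commands need root and an existing zfs dataset, so treat this as an outline rather than a tested recipe:

```shell
# Take a snapshot - cheap and atomic; the container can keep running
zfs snapshot tank/lxc/mycontainer@backup-1

# First (full) replication to the backup host
zfs send tank/lxc/mycontainer@backup-1 \
  | ssh backup-host zfs receive backup/mycontainer

# Later: take a new snapshot and send only the delta between the two -
# zfs already knows which blocks changed, so no full scan or checksumming
zfs snapshot tank/lxc/mycontainer@backup-2
zfs send -i @backup-1 tank/lxc/mycontainer@backup-2 \
  | ssh backup-host zfs receive backup/mycontainer
```

The incremental send (`-i`) is what makes this fast: its cost scales with the amount of changed data, not with the total size of the container.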


Post by John Lewis
To back up databases, you need to make sure you get a database dump before
the backup. [...]
That's the "normal", database-recommended method. Safe, but slow, in
particular if you have a large db (e.g. > 10GB).

The "quick-and-relatively-safe" way is to use snapshots (e.g. the zfs
scenario I wrote above). Most modern databases can survive an unclean
shutdown (like what happens when the server crashes, or you experience a
power failure), so as long as all the necessary files (usually data files
and journal) can be snapshotted at the same moment, you should be able to
recover from the snapshot.

IIRC btrfs should also support snapshots and incremental send/receive, but I
haven't tested it personally.
--
Fajar