Docker commit of a running container

When committing a running container with docker commit, does this create a consistent file-system snapshot?

I am considering this approach for backing up containers: just run docker commit <container> <container>:<date> and push the result to a local registry.

The backups would be incremental, since each commit just creates a new layer.
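The approach above can be sketched as follows. This is a minimal dry-run sketch: the container name `happy_feynman` and the registry address `localhost:5000` are placeholder assumptions, and the `echo` prefix only prints the commands instead of running them.

```shell
#!/bin/sh
# Placeholder names: adjust CONTAINER and REGISTRY for your setup.
CONTAINER=happy_feynman
REGISTRY=localhost:5000
TAG="$(date +%Y-%m-%d)"   # date-based tag, e.g. 2015-03-25

# The echo prefix makes this a dry run; drop it to actually execute.
echo docker commit "$CONTAINER" "$REGISTRY/backup/$CONTAINER:$TAG"
echo docker push "$REGISTRY/backup/$CONTAINER:$TAG"
```

Each such commit adds one layer on top of the previous state, which is what makes the push to the registry incremental.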

Would a large number of layers severely hurt performance when working inside the container? And is there a way to remove intermediate layers at a later point in time?

Edit

To clarify: by consistent I mean that any application designed to survive a power loss should be able to recover from these images. Basically, this means that no file may change after the snapshot starts.

In the meantime, I found out that Docker supports several storage drivers (aufs, devicemapper, btrfs). Unfortunately, there is hardly any documentation on the differences between them or the options they support.

+13
docker backup unionfs
Jun 02 '14 at 8:02
2 answers

Consistency is whatever you define it to be, I suppose.

As for flattening and reducing the number of layers: AUFS imposes a limit on how many layers an image can have, see https://github.com/dotcloud/docker/issues/332

A docker flatten approach is discussed in that issue.
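One common workaround for layer build-up is flattening a container's filesystem with export/import (note that this discards image metadata such as ENV and CMD). A dry-run sketch, with container and image names as placeholder assumptions:

```shell
#!/bin/sh
# Placeholder names; the echo keeps this a dry run.
CONTAINER=happy_feynman
FLAT_IMAGE=odev-flat

# docker export streams the container filesystem as a tar archive;
# docker import turns that tar back into a single-layer image.
CMD="docker export $CONTAINER | docker import - $FLAT_IMAGE:latest"
echo "$CMD"
```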

+1
Jun 14 '14 at 23:16

I am in a similar situation. I am thinking of not using a dedicated data volume container and instead regularly committing incremental backups. Besides the incremental backups, this approach has a big advantage for team development: as a newcomer, you can simply docker pull a database image that contains all the data needed to run, debug, and develop.

So for now I pause before committing:

 docker pause happy_feynman; docker commit happy_feynman odev:`date +%s` 

As far as I can tell, this has caused no problems so far. But this is a development machine, so I have no experience with heavily loaded servers.
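For completeness, here is a sketch that also resumes the container afterwards; the one-liner above leaves it paused until you unpause it. The container and image names are the same placeholders, and the echo prefix keeps it a dry run:

```shell
#!/bin/sh
CONTAINER=happy_feynman
STAMP="$(date +%s)"   # Unix-epoch tag, matching the one-liner above

echo docker pause "$CONTAINER"      # freeze processes so files stop changing
echo docker commit "$CONTAINER" "odev:$STAMP"
echo docker unpause "$CONTAINER"    # resume the container after the snapshot
```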

0
Mar 25 '15 at 13:11


