Recommended way to launch a Docker Compose stack in production?

I have several compose files (docker-compose.yml) describing a simple Django application (five containers, three images).

I want to run this stack in production, so that the entire stack starts at boot and the containers are restarted or recreated if they fail. There are no volumes I care about, and the containers hold no important state, so they can be recycled at will.

I haven't found much information about using Compose in production this way. The documentation is helpful, but it says nothing about starting at boot, and I am using Amazon Linux, so I don't (currently) have access to Docker Machine. I'm used to using supervisord to babysit processes and make sure they start at boot, but I don't think that's the way to do it with Docker containers, since they are ultimately managed by the Docker daemon?

As a simple start, I'm thinking of putting restart: always on all my services and writing an init script to run docker-compose up -d at boot (a sketch of what I have in mind is below). Is there a recommended way to robustly manage a Compose stack like this?
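For reference, the kind of init script I have in mind is something like this (just a sketch, assuming a systemd-based host; the unit name and paths are placeholders, not anything I have actually settled on):

    # Hypothetical /etc/systemd/system/myapp.service
    [Unit]
    Description=Django compose stack
    Requires=docker.service
    After=docker.service

    [Service]
    # oneshot + RemainAfterExit because "docker-compose up -d" returns
    # after handing the containers over to the Docker daemon
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/srv/myapp
    ExecStart=/usr/local/bin/docker-compose up -d
    ExecStop=/usr/local/bin/docker-compose down

    [Install]
    WantedBy=multi-user.target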

EDIT: I'm looking for a "simple" way to robustly run the equivalent of docker-compose up for my container stack. I know up front that all the containers declared in the stack can live on the same machine; in this case I don't need to orchestrate containers from the same stack across multiple instances, though that would be good to know too.

+6
4 answers

Compose is a client-side tool, but when you run docker-compose up -d, all the container parameters are sent to the Engine and stored there. If you set restart to always (or preferably unless-stopped, which gives you more flexibility), you don't need to run docker-compose up again every time your host boots.
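To illustrate, a minimal sketch of what that looks like in a docker-compose.yml (the service and image names here are placeholders):

    # restart policies are stored by the Engine per container,
    # so they survive daemon and host restarts
    web:
      image: myapp/web
      restart: unless-stopped
    db:
      image: postgres
      restart: unless-stopped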

When the host starts up, provided you have configured the Docker daemon to start at boot, Docker will restart all containers that have a restart policy set. So you only need to run docker-compose up -d once, and Docker takes care of the rest.
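On Amazon Linux, enabling the daemon at boot would look something like this (assuming the sysvinit-based Amazon Linux; on a systemd-based host it would be systemctl enable docker instead):

    sudo chkconfig docker on    # register the Docker daemon to start at boot
    sudo service docker start   # start it right away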

As for distributing the containers of a stack across multiple nodes in a Swarm, the preferred approach will be Distributed Application Bundles, but those are currently (as of Docker 1.12) experimental. You basically create a bundle from a single Compose file describing your distributed system, then deploy it to the swarm. Docker is moving fast, so I would expect this functionality to be stable soon.
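The experimental workflow is roughly the following sketch; since this is experimental, the exact commands and flags may change between releases:

    docker-compose bundle    # writes a .dab bundle from your docker-compose.yml
    docker deploy myapp      # experimental (1.12): deploys the myapp.dab bundle to the swarm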

+7

You can find more information about using docker-compose in production in the documentation. But, as they point out, Compose is primarily aimed at development and testing environments.

If you want to run your containers in production, I would suggest using a proper container orchestration tool like Kubernetes.
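For a taste of what that looks like, here is a minimal Kubernetes Deployment sketch (names and image are placeholders); the Deployment controller keeps the declared number of replicas running and replaces any pod that dies:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: django-web
    spec:
      replicas: 2              # desired number of pods; Kubernetes reconciles to this
      selector:
        matchLabels:
          app: django-web
      template:
        metadata:
          labels:
            app: django-web
        spec:
          containers:
          - name: web
            image: myapp/web:latest   # placeholder image
            ports:
            - containerPort: 8000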

+1

If you can express your Django application as a swarmkit service (Docker 1.11+), you can orchestrate the execution of your application as tasks.

Swarmkit has a restart policy (see the swarmctl flags):

Restart Policies: The orchestration layer monitors tasks and reacts to failures based on the specified policy.
The operator can define restart conditions, delays and limits (maximum number of attempts in a given time window). SwarmKit can decide to restart a task on a different machine. This means that faulty nodes will gradually be drained of their tasks.

Even if your "cluster" has only one node, the orchestration layer will make sure your containers are always up and running.
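For illustration, a rough equivalent with the swarm-mode CLI that ships in Docker 1.12 (swarmctl is the standalone SwarmKit tool; the service and image names are placeholders):

    docker swarm init    # turns this single node into a one-node cluster
    docker service create --name django-web \
      --restart-condition on-failure \
      --restart-delay 5s \
      --restart-max-attempts 3 \
      --restart-window 120s \
      myapp/web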

+1

You say you are using AWS, so why not use ECS, which is built for exactly this? You create an application that is a bundle of your 5 containers, and you configure which and how many EC2 instances you want in your cluster.

You just need to convert your docker-compose.yml to the specific Dockerrun.aws.json format, which is not difficult.
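A rough sketch of the multi-container (version 2) format, with placeholder names and values:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "web",
          "image": "myapp/web",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 80, "containerPort": 8000 }
          ]
        }
      ]
    }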

AWS will launch your containers on deployment and also restart them in the event of a failure.

+1
