Is it wrong to start one process in Docker without providing basic system services?

After reading the introduction to phusion/baseimage, I get the feeling that creating containers from an Ubuntu image (or any other official distribution image) and starting a single application process inside the container is incorrect.

Main reasons:

  • There is no proper init process (one that reaps zombie and orphan processes)
  • No syslog service
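As a side note on the zombie-reaping concern above: a full service stack is not the only fix. A minimal init such as tini can do the reaping alone, and newer Docker versions (1.13+, released after this discussion) build that in via the `--init` flag. A hedged sketch (`myimage` and `my-app` are placeholder names):

```shell
# Docker 1.13+ can inject a tiny init (tini) as PID 1, which reaps
# zombies on behalf of the single application process:
docker run --init -d --name app myimage:latest

# The equivalent explicit form, with tini installed in the image,
# would be a Dockerfile entrypoint like:
#   ENTRYPOINT ["/usr/bin/tini", "--"]
#   CMD ["my-app"]
```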

Based on these facts, most of the official Docker images available on Docker Hub seem to be doing it wrong. For example, the MySQL image starts mysqld as the only process and does not provide any logging facilities beyond the messages mysqld writes to STDOUT and STDERR, which are available through docker logs.
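For what it's worth, the STDOUT/STDERR stream the MySQL image relies on is still accessible from the host. A sketch, assuming a container named `some-mysql`:

```shell
# Stream mysqld's STDOUT/STDERR from the host; no syslog runs
# inside the container itself:
docker logs --follow some-mysql

# Or show only the most recent output, with timestamps:
docker logs --tail 100 --timestamps some-mysql
```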

So the question is: what is the appropriate way to start a service inside a Docker container? Is it wrong to start only one application process inside the container and not provide basic Linux system services like syslog? Does it depend on the type of service running inside the container?

2 answers

Check out this discussion for a good read on the issue. In principle, the official party line from Solomon Hykes and Docker Inc. is that containers should be as close as possible to single-process microservices. Many such containers can run on one "real" server. If a process crashes, you should simply start a new container rather than try to set up init and friends inside the container. So if you are looking for the canonical recommendation, the answer is no: don't run basic Linux services inside the container. It also makes sense when you consider that many containers run on the same node; do you really want all of them running their own copies of these services?

That said, the state of logging in Docker is famously broken. Even Solomon Hykes, the creator of Docker, acknowledges it is a work in progress. Besides, for a real-world deployment you typically need a bit more flexibility. I usually mount my logs onto the host system using volumes and have a log daemon, etc., running on the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue commands without restarting it, at least until I'm sure my containers are airtight and no further debugging is required.
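The volume-mounting approach described above can be sketched like this (image name and log paths are hypothetical; adjust them to where your application actually writes):

```shell
# Bind-mount a host directory into the container so the application's
# log files land on the host, where a host-level log daemon
# (rsyslog, logrotate, ...) can process them:
docker run -d \
  --name app \
  -v /var/log/app:/var/log/app \
  myimage:latest
```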

Edit: As of Docker 1.3 and the exec command, you no longer need to "leave an interactive shell open."
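The exec command mentioned in the edit starts an additional process inside an already-running container, so neither sshd nor an idle shell is needed for ad-hoc debugging. A minimal example, assuming a container named `app` that has bash installed:

```shell
# Open an interactive shell in a running container named "app":
docker exec -it app /bin/bash

# Or run a one-off command without an interactive session:
docker exec app ps aux
```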


It depends on the type of service you are running.

Docker lets you "build, ship, and run any application, anywhere" (from the website). That tells me that if the "application" consists of, or requires, several services/processes, then they should run in the same Docker container. It would be painful for a user to download and then run multiple Docker images just to run one application.

As a side note, whether you split your application across multiple images also depends on your particular configuration.

I do see why you might want to limit a Docker container to one process. One reason is start-up speed: when building a Docker image, you want to keep the container as lean as possible so it spins up quickly. That means if I can get away with running a single process in a container, I should go for it. But that is not always possible.

To answer your question directly: no, you do not have to run just one process in a Docker container.

HTH

