How to use Docker in development / deployment workflow?

I'm not sure I fully understand the role of Docker in the development and deployment process.

  • Let's say I create a Dockerfile with nginx, some database and something else, build a container from it, and it works fine.

  • I drop it somewhere in the cloud and run it, so that all the dependencies and environment settings get installed and configured.

  • Next, I have a repository with a web application that I want to run inside the container created and deployed in the first two steps. I work on it regularly and push changes.

Now, how do I integrate the web application into the container?

  • Do I put it as a dependency inside the Dockerfile that I create in the first step, and rebuild the container from scratch each time?
  • Or do I deploy the container once, and have procedures inside the Dockerfile install utilities that pull code from the repo on command or via hooks?
  • What if the container works, but I want to change some settings, say, for nginx? Do I add these changes to the Dockerfile and rebuild the image?

In general, what is the role of Docker in everyday application development? How often is it used if the infrastructure works fine and only the code changes?

1 answer

I think there is no single "do it exactly this way" answer; as you have already described, there are several viable concepts.

Deployment to staging / production / pre-production

a)

Do I put it as a dependency inside the Dockerfile that I create in the first step, and rebuild the container from scratch each time?

This is probably the most Docker-like way and fully consistent with the Docker philosophy. It is very portable and reproducible, and it covers everything from a single container up to a swarm of thousands. For instance, this concept has no problem suddenly scaling horizontally when you need more containers, say, due to heavy traffic / load.

It is also consistent with the idea that only configuration / data should be dynamic in a Docker container, not code / binaries / artifacts.

This strategy should be picked for production use, that is, when deployments do not happen that often. If you are worried about downtime during container rebuilds (when upgrading), there are good concepts to handle that too.

We use it for production and pre-production purposes.
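
To make a) a bit more concrete, here is a minimal shell sketch of such a deployment; the image name, tag and registry are placeholders, and it assumes your Dockerfile copies the application code into the image:

# build machine / CI: bake the current code and all dependencies into a fresh image
git pull
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3

# target host: replace the running container with one from the new image
docker pull registry.example.com/myapp:1.2.3
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:80 registry.example.com/myapp:1.2.3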

b)

Or do I deploy the container once, and have procedures inside the Dockerfile install utilities that pull code from the repo on command or via hooks?

This is a more common practice for very frequent deployments. You can go for a pull concept (what you described) or a push concept (docker cp / ssh scp); I would assume the latter is preferable in such an environment.

We use this for staging-like instances, which basically should reflect the current code base and its state. We also use it for smoke and CI tests, depending on the application: if the application changes its dependencies a lot and a clean build is required to really guarantee that things are tested as expected, we actually rebuild the image during CI instead.
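
As a rough sketch of the push concept mentioned above (the container name app, the tar file and the code path /var/www/app are assumptions, not anything from the question):

# package the current code and push it into the already running container
git archive -o app.tar HEAD
docker cp app.tar app:/tmp/app.tar
docker exec app tar -xf /tmp/app.tar -C /var/www/app
docker restart app    # or just reload the application process inside the container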


Configuration management

1.

What if the container works, but I want to change some settings, say, for nginx? Do I add these changes to the Dockerfile and rebuild the image?

I did not label this c), since it is configuration management, not application deployment, and the answer to this question can get very complicated depending on your case. In general, if a redeployment requires configuration changes, it depends on your configuration management whether you can go with b) or always have to go with a).

E.g. if you use https://github.com/markround/tiller with consul as a backend, you can push configuration changes into consul and have tiller regenerate the configuration, using something like consul watch -prefix /configuration tiller as a watch task to react to those value changes. This lets you stay with b) and still adjust the configuration.
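
Purely as an illustration of what such a watch task could look like (the key prefix is an example, and how tiller behaves when re-run depends entirely on your tiller setup; consul also needs the watch type spelled out):

# inside the container: whenever keys under "configuration/" change in consul,
# re-run tiller so it regenerates the config files from its templates
consul watch -type=keyprefix -prefix=configuration/ tiller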

You can also use https://github.com/markround/tiller during deployment, e.g. to change ENV vars or some yml file (tiller supports different data sources), and call tiller yourself as part of the deployment. For that you most likely need ssh, or you ssh onto the host and use docker cp and docker exec.
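
A hedged sketch of that push-style variant, assuming you are on the host (or have ssh'd into it), that the container is called web, and that the file tiller generated is an nginx vhost; all of these names are placeholders:

# render the config on the deploy machine with tiller, then push it into the container
tiller                                              # output location depends on your tiller config
docker cp generated/app.conf web:/etc/nginx/conf.d/app.conf
docker exec web nginx -t                            # sanity-check the new config
docker exec web nginx -s reload                     # apply it without recreating the container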

Development

In development, you usually use the same docker-compose.yml file that you use for production, but override it with a docker-compose-dev.yml to, for example, mount your code folder, set RAILS_ENV=development, reconfigure / mock things like xdebug or more verbose nginx logging, whatever you need. You can also add fake services, like a fake MTA (fermata and the like).

docker-compose -f docker-compose.yml -f docker-compose-dev.yml up

The docker-compose-dev.yml only overrides some values; it does not replace or duplicate the whole production file.
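
For illustration, a tiny override along those lines, written from the shell; the service name web, the mount path and the Rails variable are just examples of the kind of tweaks described above:

# write a minimal docker-compose-dev.yml next to the production compose file
cat > docker-compose-dev.yml <<'EOF'
version: "2"
services:
  web:
    volumes:
      - ./:/usr/src/app          # mount the local code folder into the container
    environment:
      - RAILS_ENV=development    # switch the app to development mode
EOF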

Depending on how powerful your configuration management is, you can also do some provisioning up front during development.

We actually use scaffolding for this: we use https://github.com/xeger/docker-compose, and after bringing it up, we use docker exec and docker cp to provision the instance or stage. Some examples are provided here: https://github.com/EugenMayer/docker-sync/wiki/7.-Scripting-with-docker-sync

If you are developing under OSX and run into performance issues due to the shared code folders (OSXFS), you probably want to take a look at http://docker-sync.io (I am biased, though).
