Docker Corporate Registry Best Practices

This is a question that recently came up as we prepare to go live with our own private registry: what is enterprise best practice, and why?

Q1:

  • Run multiple registries against a single S3 storage backend? Each registry gets a configuration parameter that points it at the dev, qa, or prod (top-level) folder within the same S3 bucket (see the config sketch after this list).

  • Run one registry with one S3 storage backend for all dev/qa/prod environments? Since the whole point of Docker is that an image works the same anywhere, we would just supply different docker run options per environment: the image itself is identical across environments, and only the run arguments you pass differ.

  • Run one registry and one S3 storage backend per environment.

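For option 1, I am picturing something like the sketch below, assuming the open-source Distribution registry (the registry:2 image) and its S3 storage driver; the bucket name, paths, and container names are placeholders, and each environment's registry would differ only in the rootdirectory it points at:

    # config-dev.yml (written separately) selects the s3 storage driver and
    # points it at a per-environment prefix inside the one shared bucket:
    #
    #   version: 0.1
    #   storage:
    #     s3:
    #       region: us-east-1
    #       bucket: corp-docker-registry   # single shared bucket (placeholder)
    #       rootdirectory: /dev            # /qa and /prod for the other registries
    #   http:
    #     addr: :5000
    #
    # (credentials go in the s3 section's accesskey/secretkey fields or come
    # from an IAM role on the host)
    #
    # One registry container per environment, all sharing that one bucket:
    docker run -d --name registry-dev -p 5000:5000 \
      -v "$PWD/config-dev.yml:/etc/docker/registry/config.yml:ro" \
      registry:2
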
Q2:

What is the best practice for promoting an image from dev through to production, and what tooling is involved? For example, we have a central GitLab for our Dockerfiles; when we check in a new Dockerfile, a hook triggers Jenkins to build an image from it and push it to the registry. What would be a good way to easily promote images to the next level, qa and ultimately prod (assuming you did not choose option 2 in Q1)?

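For concreteness, the kind of promotion I have in mind would be something along these lines, re-tagging the exact same image rather than rebuilding it at each stage (the registry host, image name, and version are placeholders):

    # Promote the image that passed dev/CI by re-tagging it for the next stage.
    REGISTRY=registry.example.com:5000   # placeholder private registry host
    IMAGE=myapp                          # placeholder image name
    VERSION=1.4.2                        # placeholder build version

    docker pull "$REGISTRY/$IMAGE:$VERSION-dev"
    docker tag  "$REGISTRY/$IMAGE:$VERSION-dev" "$REGISTRY/$IMAGE:$VERSION-qa"
    docker push "$REGISTRY/$IMAGE:$VERSION-qa"
    # Repeat qa -> prod after sign-off; the image is never rebuilt, only
    # re-tagged, so the bits that reach prod are the bits QA tested.

A Jenkins job with a manual approval step could run those three commands for each promotion.
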
Q3:

If you update one of your base images, what is a good way to make sure the change propagates to the other images in the registry that are built on top of it? For example, you update your custom Ubuntu base Dockerfile with new packages and want the other Dockerfiles that use this base image to be rebuilt and pushed to the registry, so that the change propagates automatically.

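The kind of wiring I am imagining is a post-push hook on the base image's job that finds every Dockerfile built FROM that base and kicks off the corresponding CI job again. The sketch below assumes one Jenkins job per image directory in the central repo and uses Jenkins' remote build-trigger URL; all hosts, names, and the token are placeholders:

    #!/usr/bin/env bash
    # After the base image is rebuilt and pushed, trigger a rebuild of every
    # downstream image whose Dockerfile starts FROM that base.
    set -euo pipefail

    BASE="registry.example.com:5000/base/ubuntu"   # placeholder base image name
    JENKINS="https://jenkins.example.com"          # placeholder Jenkins URL
    TOKEN="changeme"                               # per-job remote-trigger token

    # dockerfiles/ is assumed to be a checkout of the central GitLab repo,
    # with one subdirectory (and one Jenkins job of the same name) per image.
    for df in dockerfiles/*/Dockerfile; do
      if grep -q "^FROM ${BASE}" "$df"; then
        job=$(basename "$(dirname "$df")")
        echo "Triggering rebuild of downstream image: $job"
        curl -fsS -X POST "${JENKINS}/job/${job}/build?token=${TOKEN}"
      fi
    done
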
Q4:

Does it make a difference in any of the above cases if you have separate AWS accounts, one for DEV, one for QA, one for PROD, and so on?

1 answer

We chose your option number two. I will simply describe what we use; although we are not a large corporate environment (a few hundred containers running at any given time), perhaps it will give you some ideas. We use a single S3 backend/bucket (on one account) for all our "types" of images and namespace the images accordingly. The trick here is that our "dev" environments essentially are "production" environments. That is central to our whole design and really our raison d'etre for using Docker: our development environment fully mirrors the production environment. We have several production machines on dedicated hardware in a few geographically distinct data centers, and we run our images on top of them.

We use a standard Git workflow for code and changes. We currently use Bitbucket (mirrored to our own GitLab) and run tests on every push to master with Shippable (big Shippable fan here!). We coordinate everything with custom software that listens for webhooks on a "head" server, which then builds, tags, commits, and pushes the Docker image into our private registry (at one point the registry itself also lived on that same "head" server). That custom software then talks to a small client service on each production machine, which pulls the new image from the private registry and swaps the new container in for the old one with zero downtime. We use Jason Wilder's incredible nginx-proxy reverse proxy on all of our Docker machines, which makes this process much easier than it would be without it.

If you have hard requirements about keeping dev/QA/prod images separate from each other, I would suggest splitting that backend up as little as possible; otherwise you just end up with more possible points of failure. The real power of Docker is the uniformity of environment that you can create. When we "flip the switch" on a container from development to QA to production, all we do is change the port the container listens on from our "dev/QA" port number to the "production" port number, and track that change in our internal tracker. We use multiple DNS A records for "load balancing" and have not yet needed to scale up to real load balancers, but when we do we will probably use the load-balancing feature of the nginx-proxy image we love so much, since it is already built in.
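To make the nginx-proxy part concrete, the swap on one of our Docker hosts looks roughly like the sketch below; the image names and hostname are placeholders, while the docker.sock mount and the VIRTUAL_HOST variable are nginx-proxy's documented interface:

    # One nginx-proxy per Docker host; it watches the Docker socket and
    # regenerates its nginx config whenever containers start or stop.
    docker run -d --name nginx-proxy -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # Start the new version alongside the old one; nginx-proxy picks it up and
    # balances requests across every container sharing the same VIRTUAL_HOST.
    docker run -d --name myapp-v2 -e VIRTUAL_HOST=myapp.example.com \
      registry.example.com:5000/myapp:1.4.2

    # Once the new container is healthy, retire the old one; the proxy drops it
    # from the upstream pool and traffic keeps flowing.
    docker stop myapp-v1 && docker rm myapp-v1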

If for some reason we need to change our base image, we make the changes to that environment (updates, whatever) and then build FROM it in our new Dockerfiles. Those changes become a new image, which lands in our private registry and then rolls out to production like any other "regular" code change. I should note that we mirror all of our data to S3 (not just the final-product Docker images) in case something tragic happens and we need to stand up a new "head" server (which itself uses Docker for all of its functions). FWIW, being able to move to dedicated hardware instead of EC2 was one hell of a price break for us. Good luck!
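P.S. The mirroring itself is nothing fancy; a cron job along these lines covers it (AWS CLI assumed; the local path and bucket name are placeholders):

    # Nightly mirror of everything needed to rebuild the "head" server,
    # not just the registry's own S3-backed image data.
    aws s3 sync /srv/head-server/data s3://example-disaster-recovery/head-server/ \
      --delete   # keep the mirror exact: also remove files deleted locally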
