We went with your option number two. I'll simply describe what we use; although we're not a large corporate environment (several hundred containers running at any given time), maybe it will give you some ideas.

We use a single S3 backend/bucket (on one account) for all of our "types" of images, and keep the images properly organized within it. The trick here is that our "dev" environments are mostly "production" environments. That is central to our whole design and is really our raison d'être for using Docker: so that the development environment fully mirrors the production environment. We have several production machines on dedicated hardware in several geographically distinct data centers, and we run our images on top of them.

We use a standard Git workflow for getting code and changes in. We currently use Bitbucket (with a mirror on our own GitLab) and run tests on every push to master with Shippable (big Shippable fan here!). Everything is coordinated by custom software that listens for webhooks on a "head" server, which then builds/tags/commits and pushes the Docker image into our private registry (which at one point also lived on that same "head" server). That custom software then talks to a simple userland listener on each production machine, which pulls the new image from the private registry and does a zero-downtime swap of the new container for the old one. We use Jason Wilder's incredible, amazing nginx-proxy reverse-proxy image on all of our Docker machines, which makes this whole process much easier than it would otherwise be.

Unless you have specific requirements about keeping those dev/QA/prod images separate from each other, I'd suggest splitting this backend up as little as possible; otherwise you just end up with more possible points of failure. The real power of Docker is the uniformity of environment you can create. When we "flip the switch" on a container from development to QA to production, all we do is change the port the container listens on from our "dev/QA" port number to the "production" port number, and track that change in our internal tracker. We use several A records in DNS for "load balancing" and haven't needed to scale up to real load balancers (yet), but when we do we'll probably use the load-balancing feature of the nginx-proxy image we love so much, since it's already built in.
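To make that pipeline a little more concrete, here is a rough sketch of the two halves: the "head" server listening for a webhook and building/pushing the image, and the small listener on each production host that pulls the new image and swaps containers behind nginx-proxy. This is not our actual software, just an illustration in Python; the registry address, image name, paths, ports, and labels are all made up, and it assumes the docker CLI is installed on both machines.

```python
# Hypothetical sketch, not our actual tooling. Registry address, image name,
# paths, ports, and labels are invented; assumes the docker CLI is installed.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

REGISTRY = "registry.example.com:5000"   # hypothetical private registry
IMAGE = "myapp"                          # hypothetical image name
APP_LABEL = "app=myapp"                  # used to find old containers to retire

def sh(*cmd):
    """Run a command, raising if it exits non-zero."""
    subprocess.run(cmd, check=True)

def out(*cmd):
    """Run a command and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# --- on the "head" server: webhook -> build/tag/push into the private registry ---
class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body_len = int(self.headers.get("Content-Length", 0))
        self.rfile.read(body_len)        # real code would verify/parse the payload
        sh("git", "-C", "/srv/myapp", "pull")
        sh("docker", "build", "-t", f"{REGISTRY}/{IMAGE}:latest", "/srv/myapp")
        sh("docker", "push", f"{REGISTRY}/{IMAGE}:latest")
        # ...then notify the listener on each production host to run deploy()
        self.send_response(200)
        self.end_headers()

# --- on each production host: pull the new image and swap containers ---
def deploy(virtual_host="myapp.example.com", port="8080"):
    sh("docker", "pull", f"{REGISTRY}/{IMAGE}:latest")
    old = out("docker", "ps", "-q", "--filter", f"label={APP_LABEL}").split()
    # Start the new container first; nginx-proxy routes to it automatically
    # once it sees the VIRTUAL_HOST / VIRTUAL_PORT environment variables.
    sh("docker", "run", "-d", "--label", APP_LABEL,
       "-e", f"VIRTUAL_HOST={virtual_host}",
       "-e", f"VIRTUAL_PORT={port}",
       f"{REGISTRY}/{IMAGE}:latest")
    # Only then retire the old container(s), so there is no gap in service.
    for cid in old:
        sh("docker", "stop", cid)
        sh("docker", "rm", cid)

if __name__ == "__main__":
    HTTPServer(("", 9000), WebhookHandler).serve_forever()
```

The "start the new container, then stop the old one" ordering is roughly where the zero-downtime behaviour comes from: nginx-proxy regenerates its upstream config as soon as the new container appears, so traffic shifts over before the old container is removed.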
If for some reason we need to change our base image, we make the changes to that environment (updates, whatever) and then build FROM it in our new Dockerfile. Those changes then become a new image, which lands in our private registry and then rolls out to production like any other "regular" code. I should note that we mirror all of our data to S3 (and not just our final-product Docker images) in case something tragic happens and we need to stand up a new "head" server (which itself uses Docker for all of its functions). FWIW, it was a hell of a price break for us to be able to move to dedicated hardware instead of EC2. Good luck!
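As a rough illustration of that base-image flow (again with hypothetical names and paths; this is not our actual script), the rebuild is just another build-and-push, and the application image picks the change up through its FROM line on the next build:

```python
# Hypothetical sketch of a base-image refresh; registry address, paths,
# and tags are invented. Assumes the docker CLI is available.
import subprocess

REGISTRY = "registry.example.com:5000"
NEW_BASE_TAG = f"{REGISTRY}/base:updated"

def sh(*cmd):
    subprocess.run(cmd, check=True)

# 1. Rebuild the base image with the environment changes (OS updates, etc.)
#    from its own Dockerfile, and push it to the private registry.
sh("docker", "build", "-t", NEW_BASE_TAG, "/srv/base-image")
sh("docker", "push", NEW_BASE_TAG)

# 2. The application Dockerfile then points its FROM line at the new base
#    (e.g. "FROM registry.example.com:5000/base:updated"), so the next
#    webhook-triggered build produces an app image on top of it and it
#    rolls out to production like any other code change.
```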