The One Binary principle is explained here:
http://programmer.97things.oreilly.com/wiki/index.php/One_Binary — it says:
"Build a single binary that you can identify and promote through all the stages in the release pipeline. Hold environment-specific details in the environment. This could mean, for example, keeping them in the component container, in a known file, or in the path."
I see many development teams violating this principle by building a separate Docker image per environment (e.g. my-app-qa, my-app-prod, etc.). I also know that Docker encourages immutable infrastructure: you don't change an image after it is built, so you shouldn't patch in or reload configuration after deployment. Is there a trade-off between immutable infrastructure and the One Binary principle, or can they complement each other? When it comes to separating configuration from code, what is best practice in the Docker world? Which of the following approaches should be taken ...
1) Building a base image containing only the binary, then extending it with an environment-specific Dockerfile that adds that environment's configuration (e.g. my-app -> my-app-prod).
2) Shipping a binary-only Docker image and passing the configuration in via environment variables, etc., at deploy time.
3) Uploading the configuration into the container after the Docker image has been deployed.
4) Having the running container pull its configuration from a configuration management server at startup.
5) Keeping the configuration in the host environment and exposing it to the running Docker instance via a bind mount.
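To make option 2 concrete, here is a minimal sketch of the kind of entrypoint logic a binary-only image might use, with every environment-specific value supplied at `docker run -e ...` time. The variable names and defaults are purely illustrative, not from any particular project:

```shell
#!/bin/sh
# Option 2 sketch: the image contains only the binary; everything
# environment-specific arrives as environment variables at deploy time,
# e.g.  docker run -e APP_ENV=prod -e DB_HOST=db.prod.internal my-app:1.4.2
# (variable names here are hypothetical)
: "${APP_ENV:=dev}"          # fall back to a safe default when unset
: "${DB_HOST:=localhost}"    # in prod this would be injected by the deployer
CONFIG="app.env=${APP_ENV} db.host=${DB_HOST}"
echo "starting my-app with ${CONFIG}"
```

Because nothing environment-specific is baked into the image, the same image digest can be promoted unchanged from qa to prod, which satisfies both One Binary and immutability.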
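For option 5, a sketch of the in-container side: the host owns the configuration, the container sees it through a read-only bind mount (e.g. `docker run -v /etc/my-app/prod:/config:ro my-app`), and the application simply reads a file at startup. The temp directory below stands in for the mounted `/config`, and the file format and key name are made up for illustration:

```shell
# Option 5 sketch: simulate the container reading a bind-mounted config file.
# In a real deployment the host directory would be mounted with
#   docker run -v /etc/my-app/prod:/config:ro my-app:1.4.2
CONF_DIR=$(mktemp -d)                        # stands in for the mounted /config
echo "db_host=db.prod.internal" > "$CONF_DIR/app.conf"
# The application parses the mounted file at startup:
DB_HOST=$(sed -n 's/^db_host=//p' "$CONF_DIR/app.conf")
echo "resolved db_host: $DB_HOST"
rm -r "$CONF_DIR"
```

The image itself stays immutable; the environment owns the file, which matches the principle's advice to hold environment-specific details in the environment.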
Is there a better approach that isn't mentioned above?
How can the One Binary principle be implemented on top of immutable infrastructure? Is this achievable, or is there an inherent trade-off? What is best practice?