Populating Docker Containers with Secret Information Using Kubernetes

I have a setup that launches containers requiring access to sensitive information, such as API keys and database passwords. Right now, these sensitive values are embedded directly in the controller definitions, like this:

env:
  - name: DB_PASSWORD
    value: password

which is then available inside the Docker container as the $DB_PASSWORD environment variable. This is all pretty easy.

But the Kubernetes Secrets documentation clearly states that embedding sensitive configuration values in your definitions violates best practice and is a potential security issue. The only other strategy I can think of is the following:

  • create an OpenPGP key pair for each user community or namespace
  • use crypt to set the configuration values in etcd (encrypted with the public key)
  • create a Kubernetes secret containing the private key
  • attach that secret to the container, so the private key becomes available as a volume mount
  • when the container runs, it reads the private key from the mounted volume and uses it to decrypt the configuration values fetched from etcd
  • these can then be fed into confd, which populates local files according to template definitions (for example, Apache or WordPress configuration files)
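A minimal sketch of the secret-and-mount steps in this scheme; the secret name, key file name, and mount path below are my assumptions, not from any real setup:

```yaml
# Hypothetical Secret holding the OpenPGP private key.
apiVersion: v1
kind: Secret
metadata:
  name: pgp-private-key
type: Opaque
stringData:
  secring.gpg.asc: |
    -----BEGIN PGP PRIVATE KEY BLOCK-----
    ...
    -----END PGP PRIVATE KEY BLOCK-----
---
# Pod spec fragment mounting that Secret as a volume; the container can
# then read /etc/pgp/secring.gpg.asc and use it to decrypt the values
# fetched from etcd before handing them to confd.
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: pgp-key
          mountPath: /etc/pgp
          readOnly: true
  volumes:
    - name: pgp-key
      secret:
        secretName: pgp-private-key
```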

This seems rather complicated, but more secure and flexible, since the values would no longer be static or stored in plaintext.

So, my question is, and I know it is not an entirely objective one: is this really necessary? Only admins can view and execute the RC definitions in the first place; and if someone has compromised the Kubernetes master, you have bigger problems anyway. The only advantage I see is that there is no danger of the secrets sitting on the file system in plaintext...

Are there other ways to populate Docker containers with secret information in a safe way?

2 answers

Unless you have many megabytes of configuration, this system seems unnecessarily complicated. The intended usage is that you simply put each config into a secret, and the pods that need that config can mount the secret as a volume.
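A minimal sketch of that pattern; the secret name, file name, and mount path are hypothetical:

```yaml
# Secret holding the config; stringData avoids hand-encoding base64.
apiVersion: v1
kind: Secret
metadata:
  name: app-config
type: Opaque
stringData:
  config.sh: |
    export DB_PASSWORD=password
---
# Pod mounting the Secret as a read-only volume at /etc/secret.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: config
          mountPath: /etc/secret
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: app-config
```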

You can then use any of a variety of mechanisms to pass that config to your task; for example, if the values are environment variables, source secret/config.sh; ./mybinary is an easy way.
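To make the sourcing step concrete, here is a self-contained stand-in: the file below plays the role of the mounted secret (in a real pod it would appear under the Secret volume's mountPath rather than /tmp/secret-demo):

```shell
# Create a stand-in for the mounted secret file (a real pod would see this
# at the Secret volume's mountPath instead of /tmp/secret-demo).
mkdir -p /tmp/secret-demo
printf 'export DB_PASSWORD=password\n' > /tmp/secret-demo/config.sh

# Source the config, then the binary inherits the variables via its environment.
. /tmp/secret-demo/config.sh
echo "DB_PASSWORD is ${DB_PASSWORD}"
```

This prints "DB_PASSWORD is password"; in the pod, ./mybinary would run in place of the echo.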

I do not think you gain any extra protection by encrypting the values with a key that is itself stored as a secret.


I would personally use a remote key manager that your software can reach over the network via an HTTPS connection. For example, Keywhiz or Vault would likely fit the bill.

I would place the key manager on a separate, isolated subnet and configure the firewall to allow access only from the IP addresses that I expect to need keys. Both Keywhiz and Vault come with ACL mechanisms, so you may not need to do anything with firewalls at all, but it doesn't hurt to consider them; the key point is to place the key manager on a separate network, and possibly even with a separate hosting provider.

The local configuration file in the container would then contain only the URL of the key service and, possibly, credentials for retrieving keys from the key manager; the credentials are useless to an attacker whose requests do not match the ACL / IP address restrictions.
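As a hedged illustration of what the container-side retrieval could look like against Vault's HTTP API: the address, token file path, and secret path below are all assumptions, and this presumes the KV version 2 secrets engine mounted at secret/.

```shell
# Sketch: fetch a database secret from a Vault-style key manager over HTTPS.
# vault.internal, /etc/app/vault-token, and myapp/db are hypothetical values
# that would come from the local configuration file described above.
fetch_db_secret() {
  VAULT_ADDR="https://vault.internal:8200"
  VAULT_TOKEN="$(cat /etc/app/vault-token)"
  curl -s --header "X-Vault-Token: ${VAULT_TOKEN}" \
    "${VAULT_ADDR}/v1/secret/data/myapp/db"
}
```

In Vault's KV v2 engine the JSON response nests the values under data.data; Keywhiz exposes secrets through its own client instead. Either way, the container holds only a URL and credentials, never the secrets themselves.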
