Is it safe to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS, and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket and having it downloaded and run upon instantiation. But it all feels a little rickety, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the bucket using S3 Server-Side Encryption (the KMS method was throwing an "Unknown" error, so I fell back to S3-managed keys).

The setup

  • An IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, meaning my AWS credentials are never baked into the instance as ENV vars
  • My Instance-Init.sh script is uploaded to S3, encrypted at rest, and sits behind a private endpoint, e.g.: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh (the policy sketch just below shows the permissions the role needs)
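For reference, here's a rough sketch of the IAM policy such an instance role would need. The bucket and object names match the above, but the role name, account ID, and key ID are placeholders, and the kms:Decrypt statement only matters if you do get SSE-KMS working:

    # Rough sketch: scope the role to just this one object
    cat > instance-init-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::super-secret-bucket/Instance-Init.sh"
        },
        {
          "Effect": "Allow",
          "Action": "kms:Decrypt",
          "Resource": "arn:aws:kms:us-east-1:123456789012:key/YOUR-KEY-ID"
        }
      ]
    }
    EOF
    aws iam put-role-policy --role-name My-ASG-Role \
      --policy-name fetch-instance-init \
      --policy-document file://instance-init-policy.json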

In the User-Data field

I enter the following into the User Data field when creating the Launch Configuration I want my ASG to use:

    #!/bin/bash
    apt-get update
    apt-get -y install python-pip
    apt-get -y install awscli
    cd /home/ubuntu
    aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
    chmod +x Instance-Init.sh
    . Instance-Init.sh
    shred -u -z -n 27 Instance-Init.sh

The above does the following:

  • Updates package lists
  • Installs python-pip (required to run aws-cli)
  • Installs aws-cli
  • Changes to the /home/ubuntu directory
  • Uses aws-cli to download the Instance-Init.sh file from S3. Thanks to the IAM Role assigned to my instance, my AWS credentials are picked up automatically by aws-cli (see the sanity check after this list). The IAM Role also grants my instance the permissions necessary to decrypt the file.
  • Makes it executable
  • Runs the script
  • Securely deletes the script after it completes
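As a sanity check, you can verify on a running instance that the role really is supplying temporary credentials by querying the instance metadata service (the role name below is a placeholder):

    # Lists the role attached to this instance...
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
    # ...and dumps its temporary AccessKeyId / SecretAccessKey / Token
    curl http://169.254.169.254/latest/meta-data/iam/security-credentials/My-ASG-Role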

Instance-Init.sh script

The script itself does things like setting ENV vars and docker run-ing the containers I need deployed on my instance. Something like:

    #!/bin/bash
    export MONGO_USER='MyMongoUserName'
    export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'

    docker login -u <username> -p <password> -e <email>
    docker run -e MONGO_USER=${MONGO_USER} \
               -e MONGO_PASS=${MONGO_PASS} \
               --name MyContainerName \
               quay.io/myQuayNameSpace/MyAppName:latest


Very convenient

This creates a very convenient way to update User-Data scripts without having to create a new Launch Configuration every time you need to make a minor change. And it does an excellent job of getting ENV vars and secrets out of your code base and into a narrow, controlled space (the Instance-Init.sh script itself).
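For example, rolling out a tweaked script is a single re-upload (a sketch, assuming SSE with S3-managed keys as above); instances pick up the new version the next time they are instantiated:

    aws s3 cp Instance-Init.sh s3://super-secret-bucket/Instance-Init.sh \
      --sse AES256 --region us-east-1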

But it all feels a little insecure. The idea of putting my master DB credentials in a file on S3 is unsettling, to say the least.

Questions

  • Is this a common practice, or am I dreaming up a bad idea here?
  • Does the fact that the file is downloaded and stored (albeit briefly) on a fresh instance constitute a vulnerability at all?
  • Is there a better method for deleting the file in a more secure way?
  • Does it even matter whether the file is deleted after it's run? Given that the secrets end up in ENV vars, it seems almost pointless to delete the Instance-Init.sh file.
  • Is there something I'm missing in my nascent AWS days?

Thanks in advance for any help.

+17
bash shell amazon-s3 amazon-ec2
Apr 29 '15 at 0:36
2 answers

What you're describing is almost exactly what we do to instantiate Docker containers from our registry in production (we now use a v2 self-hosted/private, S3-backed docker-registry instead of Quay). FWIW, I had the same "this feels rickety" feeling you describe when I first went down this path, but after nearly a year of doing it, and compared with the alternatives of keeping this sensitive configuration data in a repo or baking it into the image, I'm confident it's one of the best ways of handling this data. That said, we're currently looking at using Hashicorp's new Vault software to deploy configuration secrets, replacing this "shared" encrypted secret shell script container (say that five times fast). We're thinking Vault will be the equivalent of outsourcing the crypto to the open source community (where it belongs), but for configuration storage.

In fewer words: we haven't run into many problems with a very similar setup in roughly a year of use, but we're now looking at an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
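For anyone wondering what that replacement looks like in practice, a minimal sketch (it assumes VAULT_ADDR and VAULT_TOKEN are already set on the instance and that the secrets were written to secret/myapp beforehand; the path and field names are made up):

    # Pull secrets from Vault at boot instead of a shell script on S3
    export MONGO_USER=$(vault read -field=user secret/myapp)
    export MONGO_PASS=$(vault read -field=pass secret/myapp)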

+5
May 7, '15 at 21:43

An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.

I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script; this way the sensitive data isn't exposed via docker inspect or in docker logs, etc.
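For context, the day-to-day credstash workflow looks roughly like this (the key name is illustrative, and the value is Base64-encoded here because the entrypoint script below expects that):

    # One-time: create the backing DynamoDB table
    credstash setup
    # Store a secret, encrypted with KMS and written to DynamoDB
    credstash put MONGO_PASS "$(echo -n 'Top-Secret-Dont-Tell-Anyone' | base64)"
    # Read it back
    credstash get MONGO_PASS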

Here's an example entrypoint script (for a Python application). The beauty here is that you can still pass credentials in via environment variables for non-AWS/dev environments.

    #!/bin/bash
    set -e

    # Activate virtual environment
    . /app/venv/bin/activate

    # Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set,
    # with a little help from jq.
    # AWS_DEFAULT_REGION must also be set.
    # Note: values are Base64 encoded in this example.
    if [[ -n $CREDENTIAL_STORE ]]; then
      items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
      keys=$(echo $items | jq .key -r)
      for key in $keys
      do
        export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
      done
    fi

    exec $@
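The container is then launched with the table name and region in its environment, something like the following (the image name is a placeholder; credential-store is credstash's default table name):

    docker run -e CREDENTIAL_STORE=credential-store \
               -e AWS_DEFAULT_REGION=us-east-1 \
               myorg/my-python-app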
+1
Mar 26 '16 at 3:52


