Using AWS EFS with Docker

I am using Amazon's new Elastic File System (EFS) with my single-container Elastic Beanstalk application, and I cannot understand why the mounted EFS cannot be mapped into a container.

The EFS mounts successfully at /efs-mount-point on the host.

Here is my Dockerrun.aws.json:

 {
   "AWSEBDockerrunVersion": "1",
   "Volumes": [
     {
       "HostDirectory": "/efs-mount-point",
       "ContainerDirectory": "/efs-mount-point"
     }
   ]
 }

The volume is created in the container once it launches. However, it maps the host's original /efs-mount-point directory, not the actual EFS mount point. I cannot figure out how to get Docker to expose the EFS volume mounted at /efs-mount-point inside the container instead of the underlying host directory.

Do NFS mounts not play well with Docker?

3 answers

You need to restart Docker after mounting the EFS volume on the host EC2 instance.

Here is an example .ebextensions/efs.config:

 commands:
   01mkdir:
     command: "mkdir -p /efs-mount-point"
   02mount:
     command: "mountpoint -q /efs-mount-point || mount -t nfs4 -o nfsvers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-fa35c253.efs.us-west-2.amazonaws.com:/ /efs-mount-point"
   03restart:
     command: "service docker restart"
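The mount command above builds the EFS mount-target hostname from the instance's availability zone (fetched from instance metadata), the filesystem ID, and the region. A minimal sketch of that construction, with hypothetical values standing in for the metadata lookup:

```shell
# Hypothetical values; on a real instance the AZ comes from
# http://169.254.169.254/latest/meta-data/placement/availability-zone
AZ="us-west-2a"
FS_ID="fs-fa35c253"   # your EFS filesystem ID
REGION="us-west-2"

# Per-AZ mount-target hostname, as used in the mount command above
EFS_HOST="${AZ}.${FS_ID}.efs.${REGION}.amazonaws.com"
echo "${EFS_HOST}"
# prints: us-west-2a.fs-fa35c253.efs.us-west-2.amazonaws.com
```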

EFS with AWS Beanstalk's Multi-container Docker will work, but there are several pitfalls, because you have to restart Docker after mounting EFS.

Instance Commands

Searching around, you will find that a "docker restart" is needed after mounting EFS. It is not that simple, though: you will run into problems when autoscaling and/or when deploying a new application version.

The following is the configuration I use to mount EFS on a Docker instance; these steps are needed:

  • Stop the ECS manager; this takes about 60 seconds.
  • Stop the Docker service.
  • Kill any remaining Docker processes.
  • Delete the previous network bindings. See https://github.com/docker/docker/issues/7856#issuecomment-239100381
  • Mount EFS.
  • Start the Docker service.
  • Start ECS.
  • Wait 120 seconds for ECS to reach the correct start/* state; otherwise, for example, the 00enact script will fail. Note that this sleep is mandatory, and it is really hard to find documentation on it.

Here is my script:

.ebextensions/commands.config :

 commands:
   01stopdocker:
     command: "sudo stop ecs > /dev/null 2>&1 || /bin/true && sudo service docker stop"
   02killallnetworkbindings:
     command: 'sudo killall docker > /dev/null 2>&1 || /bin/true'
   03removenetworkinterface:
     command: "rm -f /var/lib/docker/network/files/local-kv.db"
     test: test -f /var/lib/docker/network/files/local-kv.db
   # Mount the EFS created in .ebextensions/media.config
   04mount:
     command: "/tmp/mount-efs.sh"
   # On new instances, a delay is needed because of the 00task enact script.
   # It tests for start/ but ECS can be in various start states...
   # Basically, "start ecs" takes some time to run, and it runs async - so we
   # sleep to give the ECS manager time to boot before the enact and
   # post-deploy scripts run.
   09restart:
     command: "service docker start && sudo start ecs && sleep 120s"

Mount script and environment variables

.ebextensions/mount-config.config :

 # efs-mount.config
 # Copy this file to the .ebextensions folder in the root of your app source folder
 option_settings:
   aws:elasticbeanstalk:application:environment:
     EFS_REGION: '`{"Ref": "AWS::Region"}`'
     # Replace with the required mount directory
     EFS_MOUNT_DIR: '/efs_volume'
     # Use in conjunction with efs_volume.config or replace with EFS volume ID of an existing EFS volume
     EFS_VOLUME_ID: '`{"Ref" : "FileSystem"}`'

 packages:
   yum:
     nfs-utils: []

 files:
   "/tmp/mount-efs.sh":
     mode: "000755"
     content: |
       #!/bin/bash

       EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
       EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
       EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')

       echo "Mounting EFS filesystem ${EFS_VOLUME_ID} to directory ${EFS_MOUNT_DIR} ..."

       echo 'Stopping NFS ID Mapper...'
       service rpcidmapd status &> /dev/null
       if [ $? -ne 0 ] ; then
           echo 'rpc.idmapd is already stopped!'
       else
           service rpcidmapd stop
           if [ $? -ne 0 ] ; then
               echo 'ERROR: Failed to stop NFS ID Mapper!'
               exit 1
           fi
       fi

       echo 'Checking if EFS mount directory exists...'
       if [ ! -d ${EFS_MOUNT_DIR} ]; then
           echo "Creating directory ${EFS_MOUNT_DIR} ..."
           mkdir -p ${EFS_MOUNT_DIR}
           if [ $? -ne 0 ]; then
               echo 'ERROR: Directory creation failed!'
               exit 1
           fi
           chmod 777 ${EFS_MOUNT_DIR}
           if [ $? -ne 0 ]; then
               echo 'ERROR: Permission update failed!'
               exit 1
           fi
       else
           echo "Directory ${EFS_MOUNT_DIR} already exists!"
       fi

       mountpoint -q ${EFS_MOUNT_DIR}
       if [ $? -ne 0 ]; then
           AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
           echo "mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
           mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
           if [ $? -ne 0 ] ; then
               echo 'ERROR: Mount command failed!'
               exit 1
           fi
       else
           echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
       fi

       echo 'EFS mount complete.'
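The script guards the mount with `mountpoint -q`, which exits 0 only when the directory is already a mount point, making the script safe to re-run on redeploys. A standalone sketch of that guard, using a plain temporary directory (which is never a mount point) in place of the EFS directory:

```shell
# Sketch of the idempotency guard used in mount-efs.sh: mountpoint -q
# exits 0 only when the directory is already a mount point.
DIR=$(mktemp -d)   # a plain directory, not a mount point

if mountpoint -q "${DIR}"; then
  RESULT="already mounted"   # re-run case: skip the mount
else
  RESULT="not mounted yet"   # first run: the mount command would go here
fi
echo "${RESULT}"

rmdir "${DIR}"
```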

Resource and configuration

You will need to change the option_settings below. To find the VPC and subnet IDs to put in the option_settings section, look at the AWS → VPC web console, where you should find the default VPC ID and three default subnet IDs. If your Beanstalk environment uses a custom VPC, use those values instead.

.ebextensions/efs-volume.config :

 # efs-volume.config
 # Copy this file to the .ebextensions folder in the root of your app source folder
 option_settings:
   aws:elasticbeanstalk:customoption:
     EFSVolumeName: "EB-EFS-Volume"
     VPCId: "vpc-xxxxxxxx"
     SubnetUSWest2a: "subnet-xxxxxxxx"
     SubnetUSWest2b: "subnet-xxxxxxxx"
     SubnetUSWest2c: "subnet-xxxxxxxx"

 Resources:
   FileSystem:
     Type: AWS::EFS::FileSystem
     Properties:
       FileSystemTags:
         - Key: Name
           Value:
             Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EB_EFS_Volume"}
   MountTargetSecurityGroup:
     Type: AWS::EC2::SecurityGroup
     Properties:
       GroupDescription: Security group for mount target
       SecurityGroupIngress:
         - FromPort: '2049'
           IpProtocol: tcp
           SourceSecurityGroupId:
             Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
           ToPort: '2049'
       VpcId:
         Fn::GetOptionSetting: {OptionName: VPCId}
   MountTargetUSWest2a:
     Type: AWS::EFS::MountTarget
     Properties:
       FileSystemId: {Ref: FileSystem}
       SecurityGroups:
         - {Ref: MountTargetSecurityGroup}
       SubnetId:
         Fn::GetOptionSetting: {OptionName: SubnetUSWest2a}
   MountTargetUSWest2b:
     Type: AWS::EFS::MountTarget
     Properties:
       FileSystemId: {Ref: FileSystem}
       SecurityGroups:
         - {Ref: MountTargetSecurityGroup}
       SubnetId:
         Fn::GetOptionSetting: {OptionName: SubnetUSWest2b}
   MountTargetUSWest2c:
     Type: AWS::EFS::MountTarget
     Properties:
       FileSystemId: {Ref: FileSystem}
       SecurityGroups:
         - {Ref: MountTargetSecurityGroup}
       SubnetId:
         Fn::GetOptionSetting: {OptionName: SubnetUSWest2c}



AWS has instructions for automatically creating and mounting EFS on Elastic Beanstalk. You can find them here.

These instructions refer to two configuration files that you need to configure and put in the .ebextensions folder of your deployment package.

The storage-efs-mountfilesystem.config file must be modified further to work with Docker containers. Add the following command:

 02_restart:
   command: "service docker restart"

And for multi-container environments, you need to restart the Elastic Container Service agent (it was killed when Docker was restarted above):

 03_start_eb:
   command: |
     start ecs
     start eb-docker-events
     sleep 120
   test: sh -c "[ -f /etc/init/ecs.conf ]"

So the full commands section of storage-efs-mountfilesystem.config is:

 commands:
   01_mount:
     command: "/tmp/mount-efs.sh"
   02_restart:
     command: "service docker restart"
   03_start_eb:
     command: |
       start ecs
       start eb-docker-events
       sleep 120
     test: sh -c "[ -f /etc/init/ecs.conf ]"
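The `test:` key makes Elastic Beanstalk run the command only when the test command exits 0; here that means only on hosts where `/etc/init/ecs.conf` exists, i.e. multi-container Docker AMIs. The same gate in plain shell, with a hypothetical path standing in for the real file:

```shell
# Sketch of how an ebextensions `test:` gates its command: the command
# runs only if the test exits 0. MARKER is a hypothetical stand-in for
# /etc/init/ecs.conf, which exists only on multi-container Docker AMIs.
MARKER="/tmp/does-not-exist-ecs.conf"

if sh -c "[ -f ${MARKER} ]"; then
  ACTION="start ecs"   # multi-container AMI: restart the ECS agent
else
  ACTION="skip"        # single-container host: nothing to do
fi
echo "${ACTION}"
```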

The reason this doesn't work out of the box is that the Docker daemon is started on the EC2 instance before the commands in .ebextensions run. The launch order is:

  • Start the Docker daemon
  • In multi-container Docker environments, start the Elastic Container Service agent
  • Run the commands in .ebextensions
  • Run the application container(s)

The first step captures the filesystem view that the Docker daemon provides to containers. Therefore, changes to the host filesystem made in step 3 are not visible in the containers' view.

One strange effect is that the container sees the mount point directory as it was before the filesystem was mounted on the host, while the host sees the mounted filesystem. Therefore, files written by the container end up in the host directory hidden underneath the mount, not on the mounted filesystem. Unmounting the filesystem on the EC2 host reveals the files the container wrote to the mount directory.

