I have a slightly annoying problem when using a Docker container (I'm on Ubuntu, so there is no virtualization layer like VMware or boot2docker in between). I built my image, and I have a container with one directory and one file mounted (shared) from my host. Here's the docker run command in its entirety:
docker run -dit \
    -p 80:80 \
    --name my-container \
    -v $(pwd)/components:/var/www/components \
    -v $(pwd)/index.php:/var/www/index.php \
    my-image
This works fine, and both /components (with its contents) and the file are shared correctly. However, when I want to make changes to the directory (for example, add a new file or folder) or edit the mounted file (or any file inside the mounted directory), I can't, because of incorrect permissions. Running ls -lFh shows that the owner and group of the mounted items have been changed to libuuid:libuuid. Modifying these files and directories now requires root privileges, which interferes with my workflow (since I work from Sublime Text rather than the terminal, I get a pop-up asking for admin privileges).
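For what it's worth, this is roughly how I inspect the mismatch from the host. The UID 100 below is only an assumption (substitute whatever numeric owner ls -ln actually reports), and the container-side paths are the ones from my run command above:

# Numeric UID/GID of the mounted items on the host
ls -lnFh components index.php

# See which host account that numeric UID maps to;
# in my case it resolves to libuuid
getent passwd 100

# What the container itself reports as the owner
docker exec my-container ls -l /var/www/components /var/www/index.php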
Why is this happening? How can I work around it or handle it correctly? From Managing Data in Containers: Mount a host file as a data volume:
Note: Many tools used to edit files, including vi and sed --in-place, may result in an inode change. Since Docker v1.1.0, this will produce an error such as "sed: cannot rename ./sedKdJ9Dy: Device or resource busy". In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This seems to suggest that instead of mounting /components and /index.php individually, I should mount their parent directory instead. That sounds great in theory, but based on how -v behaves when mounting a directory, it would seem that every file in the parent directory would then have its ownership changed to libuuid:libuuid. Besides, the parent directory contains plenty of things that aren't needed in the container: build tools, various working files, some compressed folders, and so on. Mounting the entire parent directory seems wasteful, as sketched below.
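If I read the docs correctly, their suggestion would amount to something like the following (a sketch only; the container-side path is taken from my real run command further down):

# Mount the whole parent directory instead of the individual
# file and subdirectory, as the docs recommend
docker run -dit \
    -p 80:80 \
    --name my-container \
    -v $(pwd):/var/www/wp-content/plugins/my-plugin-directory \
    my-image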
Running chown user:group on /components and /index.php on my host machine lets me work around this, and the files appear to keep syncing with the container afterwards. Do I have to do this every time I start a container with host volumes mounted? I suspect there is a better way to handle this, but I can't find an explanation for my particular use case anywhere.
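Concretely, the workaround I run after starting the container looks like this (with $USER standing in for my actual host user and group):

# Reclaim ownership of the mounted items on the host
sudo chown -R "$USER":"$USER" components index.php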
I use this container to develop a module for another program, and I have no desire to manage a data-only container; the only files that matter are the ones on my host, and persistence isn't required anywhere (e.g. no database, etc.).
After building the image, this is the run command I use:
docker run -dit \
    -p 80:80 \
    --name my-container \
    -v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
    -v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
    my-image