I can come up with two solutions:
Use a common group id across all developers and images. The uid may show up as a raw number inside the container, but a shared gid provides at least read access, and optionally write access, without opening the files up globally. Set the setgid bit on the shared directories so new files are automatically created with that gid. This is not the cleanest approach, and it does mean sharing with other members of the group, but it can be much easier depending on your organization's workflow.
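A minimal sketch of that setup, assuming a shared group named "devs" with gid 2000 and a project directory at /path/to/project (both placeholders, pick what fits your org):

```shell
# Create the shared group on the host (gid 2000 is an assumption)
sudo groupadd -g 2000 devs

# Give the group ownership and write access on the project tree
sudo chgrp -R devs /path/to/project
sudo chmod -R g+rw /path/to/project

# setgid on directories: new files inherit the "devs" group
sudo find /path/to/project -type d -exec chmod g+s {} +

# Add the same gid to the container process so its writes are group-owned
docker run --group-add 2000 -v /path/to/project:/app my-image
```

With the setgid bit in place, files written either from the host or from the container land with the shared group, so everyone in "devs" can read (and, with g+w, write) them.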
The second option is named volumes, which I believe were added after this question was asked. They let you keep the data with a uid/gid known to the containers. The downside is that the data lives inside Docker's internal directories, where managing it from outside a container is not as simple. However, there are microservice-style approaches that keep a volume in sync with an external source (git pull, rsync, etc.) using a dedicated container that mounts the same volume. Essentially, you move all reads and writes into containers, including any backups, upgrade procedures, and test scripts.
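A short sketch of the named-volume pattern; the volume name "app-data" and image name "my-app" are placeholders:

```shell
# Create a named volume; Docker manages its storage and ownership
docker volume create app-data

# The application container mounts the volume
docker run -d --name app -v app-data:/data my-app

# Sync external content into the volume with a throwaway container,
# rather than reaching into /var/lib/docker from the host
docker run --rm -v app-data:/data -v "$PWD/src:/src:ro" alpine \
  sh -c "cp -a /src/. /data/"
```

The last command is the key idea: any job that needs to touch the data (seed, backup, rsync) runs as a container mounting the same volume, so uid/gid handling stays inside Docker.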
Update: a third option, which I often use for development environments, is to run the entrypoint script as root and compare the uid/gid of the mounted volume with the uid/gid of the user inside the container. If they do not match, the container user's uid/gid is updated to match the host. This lets developers reuse the same image on multiple hosts, where each developer's uid/gid may differ on their local machine. The code for this is in my bin/fix-perms script, which is part of my base image. The last step of my entrypoint is to use gosu to drop from root back to the user, now with the changed uid/gid, and all files written from then on will match the user on the host.
If you are running macOS, the newer osxfs file sharing in Docker for Mac handles uid/gid mismatches on host volumes automatically.