Location of data volumes in Docker Desktop (Windows)

I'm trying to learn Docker, and I'm confused about where data volumes actually exist.

I am using Docker Desktop for Windows (Windows 10).

The documentation says that when you run docker inspect on the object, it will show you the Source path: https://docs.docker.com/engine/tutorials/dockervolumes/#locating-a-volume

    $ docker inspect web

    "Mounts": [
        {
            "Name": "fac362...80535",
            "Source": "/var/lib/docker/volumes/fac362...80535/_data",
            "Destination": "/webapp",
            "Driver": "local",
            "Mode": "",
            "RW": true,
            "Propagation": ""
        }
    ]

However, I do not see this; I get the following instead:

    $ docker inspect blog_postgres-data

    [
        {
            "Driver": "local",
            "Labels": null,
            "Mountpoint": "/var/lib/docker/volumes/blog_postgres-data/_data",
            "Name": "blog_postgres-data",
            "Options": {},
            "Scope": "local"
        }
    ]

Can someone help me? I just want to know where my data volume actually exists on my host machine, and how I can get the path to it.

3 answers

Your volume data lives in /var/lib/docker/volumes/blog_postgres-data/_data inside the Docker virtual machine, and /var/lib/docker itself usually lives in the Hyper-V virtual disk under C:\Users\Public\Documents\Hyper-V\Virtual hard disks. In any case, you can verify this by looking in the Docker settings.
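If you just want to see or copy what is in the volume, it is usually easier to mount it into a throwaway container than to dig into the Hyper-V disk. A minimal sketch (the volume name blog_postgres-data is taken from your output; alpine is just an arbitrary small image):

    # mount the named volume into a temporary container and list its contents
    docker run --rm -v blog_postgres-data:/volume-data alpine ls -la /volume-data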

You can refer to the Docker documentation for information on how to share drives with Docker on Windows.

By the way, in the following output Source is the location on the host and Destination is the location inside the container:

 "Mounts": [ { "Name": "fac362...80535", "Source": "/var/lib/docker/volumes/fac362...80535/_data", "Destination": "/webapp", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ] 

===============================================================================

Updated to answer the questions in the comments:

My main curiosity is that sharing images and so on is great, but how do I share my data?

Actually, volumes are designed for exactly this purpose (managing data in a Docker container). The data in a volume is stored on the host filesystem and is isolated from the lifecycle of the container / Docker image. You can share the data in a volume in the following ways:

  • Mount a host directory as a data volume and reuse it

    docker run -v /path/on/host:/path/inside/container image

    Then all your data will be saved in /path/on/host; you can back it up, copy it to another machine, and restart the container there with the same volume.

  • Create and mount a data container.

    Create a data container: docker create -v /dbdata --name dbstore training/postgres /bin/true

    Start other containers from it using --volumes-from: docker run -d --volumes-from dbstore --name db1 training/postgres; then all the data generated by db1 will be stored in the volume of the dbstore container (see the backup sketch below).
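As a sketch of the backup idea, following the pattern from the official volume docs (dbstore is the data container created above; dbstore2 is a hypothetical second data container to restore into, and ubuntu is just a convenient image):

    # archive the /dbdata volume of the dbstore container into backup.tar in the current directory
    docker run --rm --volumes-from dbstore -v "$(pwd)":/backup ubuntu tar cvf /backup/backup.tar /dbdata

    # restore the archive into the volume of another data container
    docker run --rm --volumes-from dbstore2 -v "$(pwd)":/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"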

For more information, you can refer to the official Docker volume documentation.

Simply put, a volume is just a directory on your host containing your container's data, so you can use any method you used previously to back up / share your data.

Can I push the volume to Docker Hub, as I do with images?

No. A Docker image is something you can push to Docker Hub (a.k.a. a "registry"), but not data. You can back up / save / share your data in any way you like, but pushing data to a Docker registry for sharing does not make sense.

Can I make backups, etc.?

Yes, as written above :-)


Each container has its own file system, which is independent of the host file system. If you run your container with the -v flag, you can mount volumes so that the host and the container see the same data (for example, docker run -v hostFolder:containerFolder image).
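On Docker Desktop for Windows such a bind mount could look like the sketch below; this assumes the C: drive has been shared with Docker in the settings, and C:\Users\you\data is just a placeholder path (from Git Bash you would write it as /c/Users/you/data instead):

    # mount a Windows folder into the container and list its contents (PowerShell / cmd syntax)
    docker run --rm -v C:\Users\you\data:/data alpine ls /data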

The first output you posted describes such a mounted volume (hence "Mounts"), where "/var/lib/docker/volumes/fac362...80535/_data" (host) is mounted at "/webapp" (container).

I assume that you did not use -v, so the folder is not mounted from the host and only exists in "/var/lib/docker/volumes/blog_postgres-data/_data" inside the Docker VM. This data can be lost if the volume is removed (docker volume rm, or docker rm -v in the case of anonymous volumes), so it might be a good idea to mount a host folder instead or back the data up.

Regarding where you can access this data from Windows: as far as I know, Docker for Windows uses the bash subsystem in Windows 10. I would try to run bash for Windows 10 and navigate to that folder, or look up how to access Linux folders from Windows 10. Check this page for frequently asked questions about the Linux subsystem in Windows 10.

Update: you can also use docker cp to copy files between the host and the container.
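A minimal docker cp sketch (the container name my_container and the paths are placeholders):

    # copy a directory out of a container (running or stopped) to the Windows host
    docker cp my_container:/var/lib/postgresql/data ./pg-data-backup
    # and copy it back in
    docker cp ./pg-data-backup my_container:/var/lib/postgresql/data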


Mounting NTFS-based directories did not work for my purpose (MongoDB; as far as I know this also applies to Redis and CouchDB, at least): NTFS permissions did not allow the access these databases need when running in containers. The following is a setup with named volumes inside Hyper-V.

The following approach runs an ssh server as an additional service, configured with docker-compose so that it starts automatically, and uses public-key authentication between the host and the container. That way, data can be uploaded / downloaded via scp or sftp.

Below is the complete docker-compose.yml file for webapp + mongodb, as well as some documentation on using the ssh service:

    version: '3'
    services:

      foo:
        build: .
        image: localhost.localdomain/${repository_name}:${tag}
        container_name: ${container_name}
        ports:
          - "3333:3333"
        links:
          - mongodb-foo
        depends_on:
          - mongodb-foo
          - sshd
        volumes:
          - "${host_log_directory}:/var/log/app"

      mongodb-foo:
        container_name: mongodb-${repository_name}
        image: "mongo:3.4-jessie"
        volumes:
          - mongodata-foo:/data/db
        expose:
          - '27017'

      # since mongo data on Windows only works within a HyperV virtual disk (as of 2019-4-3),
      # the following service allows upload/download of mongo data
      # setup: copy your ~/.ssh/id_rsa.pub into $DOCKER_DATA_DIR/.ssh/id_rsa.pub, then run this service again
      # download (all mongo data): scp -r -P 2222 user@localhost:/data/mongodb [target-dir within /c/]
      # upload (all mongo data):   scp -r -P 2222 [source-dir within /c/] user@localhost:/data/mongodb
      sshd:
        image: maltyxx/sshd
        volumes:
          - mongodata-foo:/data/mongodb
          - $DOCKER_DATA_DIR/.ssh/id_rsa.pub:/home/user/.ssh/keys/id_rsa.pub:ro
        ports:
          - "2222:22"
        command: user::1001

    # please note: using a named volume like this for mongo is necessary on Windows rather than mounting an NTFS directory.
    # mongodb (and probably most other databases) are not compatible with Windows native data directories due to permissions issues.
    # this means that there is no direct access to this data; it needs to be dumped elsewhere if you want to reimport something.
    # it will however be persisted as long as you don't delete the HyperV virtual drive that the docker host is using.
    # on Linux and Docker for Mac this is not an issue, named volumes are directly accessible from the host.
    volumes:
      mongodata-foo:
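A usage sketch based on the comments in the compose file above (the user name, port, and key path come from that file; $DOCKER_DATA_DIR is assumed to be set as in the script below):

    # make your public key available to the sshd service, then start it
    cp ~/.ssh/id_rsa.pub "$DOCKER_DATA_DIR/.ssh/id_rsa.pub"
    docker-compose up -d sshd

    # download all mongo data from the named volume to the current directory
    scp -r -P 2222 user@localhost:/data/mongodb ./mongodb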

This is not directly relevant, but for a fully working example, the following script has to be sourced before any call to docker-compose, so that the exported variables are visible to it:

    #!/usr/bin/env bash
    set -o errexit
    set -o pipefail
    set -o nounset

    working_directory="$(pwd)"
    host_repo_dir="${working_directory}"
    repository_name="$(basename ${working_directory})"
    branch_name="$(git rev-parse --abbrev-ref HEAD)"
    container_name="${repository_name}-${branch_name}"
    host_log_directory="${DOCKER_DATA_DIR}/log/${repository_name}"
    tag="${branch_name}"

    export host_repo_dir
    export repository_name
    export container_name
    export tag
    export host_log_directory
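For example (the filename set-env.sh is just a placeholder):

    # load the variables into the current shell, then start the stack
    source ./set-env.sh
    docker-compose up -d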
