Allow docker container to connect to local postgres database

I have recently been playing around with Docker and QGIS, and installed a container following the instructions in this guide.

Everything works fine, except that I can't connect to my local Postgres database, which contains all my GIS data. I believe this is because my Postgres database is not configured to accept remote connections, so I edited the Postgres conf files to allow remote connections, following the instructions in this article.

However, I still get an error message when I try to connect to the database from QGIS running in Docker: could not connect to server: Connection refused. Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433? The Postgres server is running, and I have edited my pg_hba.conf file to allow connections from a range of IP addresses (172.17.0.0/32). I had previously looked up the IP address of the Docker container with docker ps, and although the address changes, so far it has always been in the 172.17.0.x range.

Any ideas why I can't connect to this database? Probably something very simple I imagine!

I am using Ubuntu 14.04; Postgres 9.3

+95
docker ubuntu qgis
Jul 6 '15
9 answers

TL;DR

  • Use 172.17.0.0/16 as the IP address range, not 172.17.0.0/32.
  • Do not use localhost to connect to the PostgreSQL database on your host, but the host's IP instead. To keep the container portable, start the container with the flag --add-host=database:<host-ip> and use database as the host name for connecting to PostgreSQL (see the sketch just after this list).
  • Make sure PostgreSQL is configured to listen for connections on all IP addresses, not just on localhost. Look for the listen_addresses setting in the PostgreSQL configuration file, typically found in /etc/postgresql/9.3/main/postgresql.conf (credits to @DazmoNorton).
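For example, a minimal sketch of the --add-host approach (the host IP 192.168.1.10, the image name and the database credentials are placeholders, not values from the question):

 # start the container and map the host name "database" to the Docker host's IP
 $ docker run --add-host=database:192.168.1.10 -it my/qgis-image

 # inside the container, connect to PostgreSQL on the host through that name
 $ psql -h database -p 5432 -U gisuser -d gisdb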

Long version

172.17.0.0/32 is not a range of IP addresses, but a single address (namely 172.17.0.0). Your Docker container will never get that address, because it is the network address of the Docker bridge interface (docker0).

When Docker starts, it creates a new bridge network interface, which you can easily see by calling ip a:

 $ ip a
 ...
 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
     link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
     inet 172.17.42.1/16 scope global docker0
        valid_lft forever preferred_lft forever

As you can see, in my case the docker0 interface has the IP address 172.17.42.1 with a netmask of /16 (i.e. 255.255.0.0). This means that the network address is 172.17.0.0/16.

The IP address is assigned randomly, but without any additional configuration, it will always be on the 172.17.0.0/16 network. Each Docker container will be assigned a random address from this range.

This means that if you want to grant access to your database from all possible containers, use 172.17.0.0/16.
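Putting this together, the relevant PostgreSQL configuration would look roughly as follows (a sketch, assuming the default paths of the Ubuntu 14.04 / PostgreSQL 9.3 setup from the question; the authentication method is your choice):

 # /etc/postgresql/9.3/main/postgresql.conf
 listen_addresses = '*'        # listen on all interfaces, not only localhost

 # /etc/postgresql/9.3/main/pg_hba.conf
 # allow password authentication from any container on the default Docker bridge
 host    all    all    172.17.0.0/16    md5

Reload or restart PostgreSQL afterwards, e.g. with sudo service postgresql restart.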

+116
Jul 06 '15 at 15:05

Docker Solution for Mac

Update, June 2017:

Thanks to @Birchlabs' comment, there is now a special DNS name, available only on Docker for Mac:

 docker run -e DB_PORT=5432 -e DB_HOST=docker.for.mac.host.internal 

As of release 17.12.0-ce-mac46, docker.for.mac.host.internal should be used instead of docker.for.mac.localhost. See the release notes for details.

Old version

@Helmbert's answer explains the problem well, but Docker for Mac does not expose the bridge network, so I had to use this trick to work around the limitation:

 $ sudo ifconfig lo0 alias 10.200.10.1/24 

Open /usr/local/var/postgres/pg_hba.conf and add this line:

 host all all 10.200.10.1/24 trust 

Open /usr/local/var/postgres/postgresql.conf and edit listen_addresses:

 listen_addresses = '*' 

Restart the service and start your container:

 $ PGDATA=/usr/local/var/postgres pg_ctl reload
 $ docker run -e DB_PORT=5432 -e DB_HOST=10.200.10.1 my_app

What this workaround does is basically the same as @helmbert's answer, but uses an IP address that is attached to lo0 instead of the docker0 network interface.
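To check the workaround from inside a container, something like the following should work (a sketch; the user and database names are placeholders):

 # run a throwaway container and connect to PostgreSQL through the lo0 alias address
 $ docker run -it --rm postgres:9.6 psql -h 10.200.10.1 -p 5432 -U myuser -d mydb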

+48
Jan 10 '17 at 5:57

Simple solution for Mac:

Recent Docker versions (18.03+) offer a built-in solution for this. Inside your Docker container, simply set the database host to host.docker.internal. It resolves to the host machine the container is running on.

The documentation for this is here: https://docs.docker.com/docker-for-mac/networking/#per-container-ip-addressing-is-not-possible
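For example, reusing the environment-variable style from the Mac answer above (DB_HOST/DB_PORT and the image name are placeholders for whatever your application expects):

 docker run -e DB_PORT=5432 -e DB_HOST=host.docker.internal my_app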

+26
May 17 '18 at 14:49

A simple solution

Just add --network=host to docker run. That's all!

Thus, the container will use the host network, so localhost and 127.0.0.1 will point to the host (by default, they point to the container). Example:

 docker run -d --network=host \
   -e "DB_DBNAME=your_db" \
   -e "DB_PORT=5432" \
   -e "DB_USER=your_db_user" \
   -e "DB_PASS=your_db_password" \
   -e "DB_HOST=127.0.0.1" \
   --name foobar foo/bar
+3
Mar 20 '19 at 21:33

In Ubuntu:

First, check whether the Docker database port is reachable on your system with the following command:

 sudo iptables -L -n 

Sample output:

 Chain DOCKER (1 references)
 target     prot opt source       destination
 ACCEPT     tcp  --  0.0.0.0/0    172.17.0.2    tcp dpt:3306
 ACCEPT     tcp  --  0.0.0.0/0    172.17.0.3    tcp dpt:80
 ACCEPT     tcp  --  0.0.0.0/0    172.17.0.3    tcp dpt:22

Here 3306 is used as the Docker database port on IP 172.17.0.2. If this port is not accessible, run the following command:

 sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT 

Now you can access the Docker database from your local system with the following configuration:

 host: 172.17.0.2
 adapter: mysql
 database: DATABASE_NAME
 port: 3307
 username: DATABASE_USER
 password: DATABASE_PASSWORD
 encoding: utf8
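As a quick check from the host, you can try connecting with a MySQL client before wiring up the application (a sketch; use the container port shown in the iptables output above, and replace the placeholder credentials):

 mysql -h 172.17.0.2 -P 3306 -u DATABASE_USER -p DATABASE_NAME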

In CentOS:

First, check whether the Docker database port is allowed in your firewall by running the following command:

 sudo firewall-cmd --list-all 

Sample output:

 target: default
 icmp-block-inversion: no
 interfaces: eno79841677
 sources:
 services: dhcpv6-client ssh
 ports: 3307/tcp
 protocols:
 masquerade: no
 forward-ports:
 sourceports:
 icmp-blocks:
 rich rules:

Here 3307/tcp is open as the Docker database port for IP 172.17.0.2. If this port is not listed, run the following command:

 sudo firewall-cmd --zone=public --add-port=3307/tcp 

To add the port permanently and reload the firewall:

 sudo firewall-cmd --permanent --add-port=3307/tcp
 sudo firewall-cmd --reload

Now you can easily access the Docker database from your local system using the above configuration.

+1
May 23 '18 at 7:06

For docker-compose you can simply add

 network_mode: "host" 

example:

 version: '2'
 services:
   feedx:
     build: web
     ports:
       - "127.0.0.1:8000:8000"   # note: published ports are discarded when network_mode is "host"
     network_mode: "host"

https://docs.docker.com/compose/compose-file/#network_mode

+1
May 27 '19 at 7:07

Another thing needed for my installation was to add

 172.17.0.1 localhost 

to /etc/hosts

so that Docker points at 172.17.0.1 as the database host name and does not rely on a changing external IP to find the database. Hope this helps someone else with this issue!

0
Mar 02 '17 at 7:02

To set up something simple that allows PostgreSQL connections from the Docker container to my localhost, I used this in postgresql.conf:

 listen_addresses = '*' 

And added this to pg_hba.conf:

 host all all 172.17.0.0/16 password 

Then restart PostgreSQL (I restarted the machine). My client in the Docker container (which was at 172.17.0.2) could then connect to PostgreSQL running on my localhost using the host address, database, username and password.
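From inside the container, such a connection can be tested with psql against the docker0 gateway address (a sketch; 172.17.0.1 is the usual default-bridge gateway, and the user and database names are placeholders):

 psql -h 172.17.0.1 -p 5432 -U myuser -d mydb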

0
Sep 19 '19 at 16:49

Another option works at the service level: by defining a volume, you can mount the host's PostgreSQL data directory into a container. See the compose file below for details.

 version: '2'
 services:
   db:
     image: postgres:9.6.1
     volumes:
       - "/var/lib/postgresql/data:/var/lib/postgresql/data"
     ports:
       - "5432:5432"

This way a second PostgreSQL server runs inside the container, but it uses the same data directory that the host's PostgreSQL service uses.

-1
Mar 02 '17 at 7:59


