How to configure Docker port mapping to use Nginx as a reverse proxy?

Update II

It's now July 16, 2015, and everything has changed again. I came across this automated reverse-proxy container from Jason Wilder: https://github.com/jwilder/nginx-proxy , and it solves the problem in about as long as it takes to docker run the container. This is now the solution I use.

Update

It's now July 2015, and the landscape for networking Docker containers has changed radically. There are now many different offerings that solve this problem (in a variety of ways).

You should use this post to gain a basic understanding of the docker --link approach to service discovery, which is about as basic as it gets, works very well, and actually requires less fancy-dancing than most of the other solutions. It is limited in that it's quite difficult to network containers on separate hosts in any given cluster, and containers cannot be restarted once networked, but it does offer a quick and relatively easy way to network containers on the same host. It's also a good way to get a sense of what the software you'll likely end up using to solve this problem is actually doing under the hood.

Additionally, you'll probably want to check out Docker's nascent networking feature, Hashicorp's consul, Weaveworks' weave, Jeff Lindsay's progrium/consul & gliderlabs/registrator, and Google's Kubernetes.

There are also the CoreOS offerings that utilize etcd, fleet and flannel.

And if you really want to have a party, you can spin up a cluster to run Mesosphere, or Deis, or Flynn.

If you're new to networking (like me), then you should get out your reading glasses, pop "Paint The Sky With Stars - The Best of Enya" on the Hi-Fi, and crack open a beer - it's going to be a while before you really understand exactly what you're trying to do. Hint: you're trying to implement a Service Discovery Layer in your Cluster Control Plane. It's a very nice way to spend a Saturday night.

It's a lot of fun, but I wish I'd taken the time to educate myself better about networking as a whole before diving in. I eventually found a couple of posts from the benevolent Digital Ocean tutorial gods: Introduction to Networking Terminology and Understanding ... Networking. I suggest reading them a few times before diving in.

Good luck



Original post

I can't figure out how to do port mapping for Docker containers. Specifically, how to pass requests from Nginx to another container that is listening on another port, on the same server.

I have a Dockerfile for an Nginx container, for example:

    FROM ubuntu:14.04
    MAINTAINER Me <me@myapp.com>

    RUN apt-get update && apt-get install -y htop git nginx

    ADD sites-enabled/api.myapp.com /etc/nginx/sites-enabled/api.myapp.com
    ADD sites-enabled/app.myapp.com /etc/nginx/sites-enabled/app.myapp.com
    ADD nginx.conf /etc/nginx/nginx.conf

    RUN echo "daemon off;" >> /etc/nginx/nginx.conf

    EXPOSE 80 443

    CMD ["service", "nginx", "start"]



And then the api.myapp.com configuration looks like this:

    upstream api_upstream {
        server 0.0.0.0:3333;
    }

    server {
        listen 80;
        server_name api.myapp.com;
        return 301 https://api.myapp.com/$request_uri;
    }

    server {
        listen 443;
        server_name api.myapp.com;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;

            proxy_pass http://api_upstream;
        }
    }

And another one for app.myapp.com.

And then I run:

 sudo docker run -p 80:80 -p 443:443 -d --name Nginx myusername/nginx 


And all of this works fine, but the requests don't get passed through to the other containers/ports. And when I ssh into the Nginx container and check the logs, I see no errors.

Any help?

+69
docker nginx
Jan 13 '15 at 0:03
5 answers

@T0xicCode's answer is correct, but I thought I would expand on the details, since it actually took me about 20 hours to finally get a working solution implemented.

If you're looking to run Nginx in its own container and use it as a reverse proxy to load balance multiple applications on the same server instance, then the steps you need to follow are as follows:

Link your containers

When you docker run your containers, typically via a shell script in User Data, you can declare links to any other running containers. This means that you need to start your containers up in order, and only the latter containers can link to the former ones. Like so:

    #!/bin/bash
    sudo docker run -p 3000:3000 --name API mydockerhub/api
    sudo docker run -p 3001:3001 --link API:API --name App mydockerhub/app
    sudo docker run -p 80:80 -p 443:443 --link API:API --link App:App --name Nginx mydockerhub/nginx

So in this example, the API container isn't linked to any others, but the App container is linked to API, and Nginx is linked to both API and App.

The result of this is changes to the env vars and the /etc/hosts files within the containers that declare the links (the App and Nginx containers here). The results look like so:

/etc/hosts

Running cat /etc/hosts in your Nginx container will result in the following:

    172.17.0.5      0fd9a40ab5ec
    127.0.0.1       localhost
    ::1             localhost ip6-localhost ip6-loopback
    fe00::0         ip6-localnet
    ff00::0         ip6-mcastprefix
    ff02::1         ip6-allnodes
    ff02::2         ip6-allrouters
    172.17.0.3      App
    172.17.0.2      API



ENV Vars

Running env in your Nginx container will result in the following:

    API_PORT=tcp://172.17.0.2:3000
    API_PORT_3000_TCP_PROTO=tcp
    API_PORT_3000_TCP_PORT=3000
    API_PORT_3000_TCP_ADDR=172.17.0.2

    APP_PORT=tcp://172.17.0.3:3001
    APP_PORT_3001_TCP_PROTO=tcp
    APP_PORT_3001_TCP_PORT=3001
    APP_PORT_3001_TCP_ADDR=172.17.0.3

I've truncated many of the actual vars, but the above are the key values you need to proxy traffic through to your linked containers.

To get a shell to run the above commands in a running container, use the following:

sudo docker exec -i -t Nginx bash

You can see that you now have both /etc/hosts entries and env vars that contain the local IP address for each of the containers that were linked. As far as I can tell, this is all that actually happens when you run containers with link options declared. But you can now use this information to configure nginx within your Nginx container.



Configure Nginx

This is where it gets a little tricky, and there are a couple of options. You can choose to configure your sites to point to an entry in the /etc/hosts file that docker created, or you can utilize the env vars and run a string replacement (I used sed) on your nginx.conf and any other conf files that may be in your /etc/nginx/sites-enabled folder to insert the IP values.



OPTION A: Configuring Nginx Using ENV Vars

This is the option that I went with because I couldn't get the /etc/hosts file option to work. I'll be trying Option B soon enough and will update this post with any findings.

The key difference between this option and the /etc/hosts option is how you write your Dockerfile to use a shell script as the CMD argument, which in turn handles the string replacement to copy the IP values from the env vars into your conf file(s).

Here's the set of configuration files I ended up with:

Dockerfile

    FROM ubuntu:14.04
    MAINTAINER Your Name <you@myapp.com>

    RUN apt-get update && apt-get install -y nano htop git nginx

    ADD nginx.conf /etc/nginx/nginx.conf
    ADD api.myapp.conf /etc/nginx/sites-enabled/api.myapp.conf
    ADD app.myapp.conf /etc/nginx/sites-enabled/app.myapp.conf
    ADD Nginx-Startup.sh /etc/nginx/Nginx-Startup.sh

    EXPOSE 80 443

    CMD ["/bin/bash","/etc/nginx/Nginx-Startup.sh"]

nginx.conf

    daemon off;
    user www-data;
    pid /var/run/nginx.pid;
    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        # Basic Settings
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 33;
        types_hash_max_size 2048;
        server_tokens off;
        server_names_hash_bucket_size 64;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Logging Settings
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        # Gzip Settings
        gzip on;
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 3;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/xml text/css application/x-javascript application/json;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";

        # Virtual Host Configs
        include /etc/nginx/sites-enabled/*;

        # Error Page Config
        # error_page 403 404 500 502 /srv/Splash;
    }

NOTE: It's important to include daemon off; in your nginx.conf file so that your container doesn't exit immediately after launching.

api.myapp.conf

    upstream api_upstream {
        server APP_IP:3000;
    }

    server {
        listen 80;
        server_name api.myapp.com;
        return 301 https://api.myapp.com/$request_uri;
    }

    server {
        listen 443;
        server_name api.myapp.com;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;

            proxy_pass http://api_upstream;
        }
    }

Nginx-Startup.sh

    #!/bin/bash
    sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf
    sed -i 's/APP_IP/'"$APP_PORT_3001_TCP_ADDR"'/g' /etc/nginx/sites-enabled/app.myapp.conf
    service nginx start

I'll leave it as homework to study most of the contents of nginx.conf and api.myapp.conf.

The magic happens in Nginx-Startup.sh, where we use sed to do string replacement on the APP_IP placeholder that we've written into the upstream block of our api.myapp.conf and app.myapp.conf files.

This ask.ubuntu.com question explains this very nicely: Find and replace text in a file using commands

GOTCHA: On OSX, sed handles options differently, specifically the -i flag. On Ubuntu, the -i flag handles the replacement in place; it opens the file, changes the text, and then saves over the file. On OSX, the -i flag requires the file extension you'd like the resulting file to have. If you're working with a file that has no extension, you must pass '' as the value for the -i flag.
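For example, the difference looks roughly like this (the IP and file path here are just placeholder values for illustration):

    # Ubuntu / GNU sed: -i edits the file in place, no backup file needed
    sed -i 's/APP_IP/172.17.0.2/g' /etc/nginx/sites-enabled/api.myapp.conf

    # OSX / BSD sed: -i requires a backup extension; pass '' to skip the backup
    sed -i '' 's/APP_IP/172.17.0.2/g' /etc/nginx/sites-enabled/api.myapp.conf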

GOTCHA: To use env vars within the regex that sed uses to find the string you want to replace, you need to wrap the var in double quotes. So the correct, albeit odd-looking, syntax is as above.
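A tiny illustration of why the quoting matters (the variable here is just an example, not one that docker sets):

    PLACEHOLDER_IP=172.17.0.2

    # Fully single-quoted: the shell does NOT expand the variable, sed would see the literal text
    echo 's/APP_IP/$PLACEHOLDER_IP/g'       # prints: s/APP_IP/$PLACEHOLDER_IP/g

    # Close the single quotes and double-quote the var so the shell expands it
    echo 's/APP_IP/'"$PLACEHOLDER_IP"'/g'   # prints: s/APP_IP/172.17.0.2/g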

So docker has launched our container and kicked off the Nginx-Startup.sh script, which used sed to change APP_IP to the value of the corresponding env var we provided in the sed command. We now have conf files within our /etc/nginx/sites-enabled directory that contain the IP addresses from the env vars that docker set when starting up the container. Within your api.myapp.conf file, you'll see the upstream block has changed to this:

    upstream api_upstream {
        server 172.17.0.2:3000;
    }

The IP address you see may be different, but I've noticed that it's usually in the 172.17.0.x range.

You should now have all your routing working appropriately.

GOTCHA: You cannot restart the linked containers once you've launched the initial instance. Docker gives each container a new IP address when it starts up, and doesn't seem to re-use the one it had before. So api.myapp.com will get 172.17.0.2 the first time, but then 172.17.0.4 the next time. Nginx, though, has already baked the first IP address into its conf files (or into its /etc/hosts file), so it can't discover the new IP for api.myapp.com. The solution to this is likely to use CoreOS and its etcd service, which, in my limited understanding, acts like a shared env for all machines registered into the same CoreOS cluster. That's the next toy I'm going to play with setting up.
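If you want to see this happening for yourself, one way (assuming the container names and conf paths from the examples above) is to compare the IP docker has currently assigned to a container with what got baked into your nginx conf:

    # On the host: show the current internal IP of the running API container
    sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' API

    # Inside the Nginx container: show the IP that was substituted into the conf
    grep 'server ' /etc/nginx/sites-enabled/api.myapp.conf

After the API container has been restarted, the two will typically no longer match, which is exactly why the proxying breaks.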



OPTION B: Use /etc/hosts File Entries

This should be the quicker, easier way of doing it, but I couldn't get it to work. Ostensibly, you just enter the value of the /etc/hosts entry into your api.myapp.conf and app.myapp.conf files, but I couldn't get this method to work.

UPDATE: See @Wes Tod's answer below for instructions on how to make this method work.

Here is the attempt I made in api.myapp.conf :

    upstream api_upstream {
        server API:3000;
    }

Considering that there's an entry in my /etc/hosts file like 172.17.0.2 API, I figured it would just pull in the value, but that doesn't seem to be happening.
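If you're debugging this route, one sanity check I'd suggest (run from inside the Nginx container, assuming the link alias API as above) is to confirm that the name resolves at all, independently of nginx:

    # Confirm the link alias resolves via /etc/hosts
    getent hosts API

    # If ping is installed in the image, this should answer too
    ping -c 1 API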

I also had some issues with my Elastic Load Balancer sourcing from all AZs, so that may have been the problem when I tried this route. Instead, I had to learn how to handle string replacement on Linux, so that was fun. I'll give it another try in a while and see how it goes.

+52
Jan 18 '15 at 19:47

Using docker links, you can link the upstream container to the nginx container. An added benefit is that docker manages the hosts file, which means you can refer to the linked container by name rather than a potentially random IP.
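A minimal sketch of the idea (the image and container names are placeholders, loosely following the naming used elsewhere in this thread):

    # Start the upstream app container, then link it into the nginx container by name
    docker run -d --name api mydockerhub/api
    docker run -d -p 80:80 --link api:api --name nginx mydockerhub/nginx

    # Because docker writes the link alias into the nginx container's /etc/hosts,
    # the nginx config can refer to it directly, e.g.:
    #   upstream api_upstream { server api:3000; }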

+9
Jan 13 '15 at 0:10

AJB "Option B" can be made to work by using the Ubuntu base image and configuring nginx yourself. (This did not work when I used the Nginx image from the Docker Hub.)

Here is the Dockerfile I used:

    FROM ubuntu
    RUN apt-get update && apt-get install -y nginx
    RUN ln -sf /dev/stdout /var/log/nginx/access.log
    RUN ln -sf /dev/stderr /var/log/nginx/error.log
    RUN rm -rf /etc/nginx/sites-enabled/default
    EXPOSE 80 443
    COPY conf/mysite.com /etc/nginx/sites-enabled/mysite.com
    CMD ["nginx", "-g", "daemon off;"]

My nginx config (aka: conf/mysite.com):

    server {
        listen 80 default;
        server_name mysite.com;

        location / {
            proxy_pass http://website;
        }
    }

    upstream website {
        server website:3000;
    }

And finally, how I run my containers:

    $ docker run -dP --name website website
    $ docker run -dP --name nginx --link website:website nginx

This got me up and running, so my nginx pointed its upstream to the second docker container, which exposed port 3000.

+7
Mar 23 '15 at 0:45

I tried using the popular Jason Wilder reverse proxy that magically works for everyone, and found out that it doesn't work for everyone (i.e.: me). And since I'm brand new to NGINX, I didn't like the fact that I didn't understand the technologies I was trying to use.

Wanted to add my 2 cents, because the discussion above around linking containers is now dated, since container linking is a deprecated feature. So here's an explanation of how to do it using networks instead. This answer is a full example of setting up nginx as a reverse proxy to a static website using Docker Compose and nginx configuration.

TL; DR;

Add the services that need to talk to each other onto a predefined network. For a step-by-step discussion of Docker networks, I learned some things here: https://technologyconversations.com/2016/04/25/docker-networking-and-dns-the-good-the-bad-and-the-ugly/

Define the network

First of all, we need a network that all your backend services can talk on. I called mine web, but it can be whatever you want.

 docker network create web 

Create application

We'll just make a simple website app. The website is a simple index.html page served by an nginx container. The content is mounted into the container as a volume from the host's content folder.

Dockerfile:

    FROM nginx
    COPY default.conf /etc/nginx/conf.d/default.conf

default.conf

    server {
        listen 80;
        server_name localhost;

        location / {
            root /var/www/html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

docker-compose.yml

 version: "2" networks: mynetwork: external: name: web services: nginx: container_name: sample-site build: . expose: - "80" volumes: - "./content/:/var/www/html/" networks: default: {} mynetwork: aliases: - sample-site 

Note that port mapping is no longer needed here. We simply expose port 80. This is handy for avoiding port collisions.
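For contrast, here's a rough sketch of the difference between exposing and publishing a port (the host port below is just an example value):

    services:
      nginx:
        # expose: documents the port for other containers on the shared Docker
        # network - nothing is bound on the host
        expose:
          - "80"
        # ports: would additionally publish the port on the host, which is what
        # we're avoiding for backend services that sit behind the proxy
        # ports:
        #   - "8080:80"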

Run the application

Fire this site up with

 docker-compose up -d 

Some interesting checks regarding dns mappings for your container:

    docker exec -it sample-site bash
    ping sample-site

This ping should work inside your container.

Build the proxy

Nginx Reverse Proxy:

Dockerfile

    FROM nginx
    RUN rm /etc/nginx/conf.d/*

We wipe out all the default virtual host configuration, since we're going to customize it.

docker-compose.yml

 version: "2" networks: mynetwork: external: name: web services: nginx: container_name: nginx-proxy build: . ports: - "80:80" - "443:443" volumes: - ./conf.d/:/etc/nginx/conf.d/:ro - ./sites/:/var/www/ networks: default: {} mynetwork: aliases: - nginx-proxy 

Run proxy

Launch the proxy server using our trusty

 docker-compose up -d 

Assuming there are no problems, you now have two containers that can talk to each other by name. Let's test it.

    docker exec -it nginx-proxy bash
    ping sample-site
    ping nginx-proxy

Virtual host setup

The final detail is to set up the virtual host file so the proxy can direct traffic based on however you want to match it:

sample-site.conf for our virtual host:

    server {
        listen 80;
        listen [::]:80;

        server_name my.domain.com;

        location / {
            proxy_pass http://sample-site;
        }
    }

Based on how the proxy is configured, you'll need this file stored in your local conf.d folder, which we mounted via the volumes declaration in the docker-compose file.

Last but not least, tell nginx to reload its configuration.

 docker exec nginx-proxy service nginx reload 

This sequence of steps is the culmination of hours of head-banging as I struggled with the ever-painful 502 Bad Gateway error while learning nginx for the first time, since most of my experience was with Apache.

This answer should demonstrate how to kill the 502 Bad Gateway error that occurs because the containers cannot talk to each other.

I hope this answer saves someone out there some pain, because for some reason it was hard to find a good explanation of getting containers to talk to each other, even though I expected it to be an obvious use case. But then again, I'm dumb. And please let me know how I can improve this approach.

+6
May 6 '17

Just found an article from Anand Mani Sankar that shows a simple way of using an nginx upstream proxy with docker-compose.

Basically, you configure the container linking and ports in the docker-compose file and update the upstream in nginx.conf accordingly.
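The article has the full details; as a rough sketch of what that combination looks like (the service names, build paths, and port are invented for illustration, not taken from the article):

    # docker-compose.yml (sketch; old compose v1-style links, matching the era of the article)
    web:
      build: ./web
      expose:
        - "8000"

    nginx:
      build: ./nginx
      ports:
        - "80:80"
      links:
        - web:web

The nginx.conf baked into the nginx image would then declare an upstream that points at the linked service by its alias, e.g. upstream app_servers { server web:8000; }, and proxy_pass to that upstream.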

+1
Nov 14 '15 at 0:07


