Node http-proxy in Docker container

I have the following code that works fine in my local environment. However, when I run the same code inside a Docker container (via Boot2Docker), I just can't reach https://[boot2docker_ip]:4000

I tried updating the target value in the code below with each of these parameters, but none of them did the trick:

target: 'http://localhost:3000',
target: 'http://0.0.0.0:3000',
target: 'http://127.0.0.1:3000',
target: 'http://<boot2docker_ip>:3000',

 var fs = require('fs');

 require('http-proxy').createProxyServer({
   ssl: {
     key: fs.readFileSync(__dirname + '/certs/ssl.key', 'utf8'),
     cert: fs.readFileSync(__dirname + '/certs/ssl.crt', 'utf8')
   },
   target: 'http://localhost:3000',
   ws: true,
   xfwd: true
 }).listen(4000);

I am using the node-http-proxy package from https://github.com/nodejitsu/node-http-proxy

Edit

Below is a git repo that reproduces this behavior; I checked in fake SSL certificates for simplicity.

Dockerfile:

 FROM readytalk/nodejs
 ADD ./src /app
 ADD ./ssl-proxy /proxy
 COPY ./run.sh /run.sh
 RUN chmod +x /run.sh
 EXPOSE 3000
 EXPOSE 4000
 ENTRYPOINT ["/run.sh"]

run.sh:

 #!/bin/sh
 /nodejs/bin/node /app/main.js; /nodejs/bin/node /proxy/main.js
1 answer

I just looked at your Dockerfile, and especially at the run.sh script it uses. Your run.sh script contains this line:

 /nodejs/bin/node /app/main.js; /nodejs/bin/node /proxy/main.js 

It is important to understand that each of these commands starts a long-running server process which (in theory) runs forever. This means that the second process ( /proxy/main.js ) will never start, because the shell waits for the first process to finish before moving on to the next command.
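You can see this shell behaviour in isolation with the following self-contained sketch, which times a `;`-separated command against a backgrounded (`&`) one, using `sleep` as a stand-in for a long-running server:

```shell
#!/bin/sh
# With ';' the shell waits for the first command to exit before
# starting the second; with '&' it moves on immediately.

start=$(date +%s)
sleep 2                 # stands in for a server that "runs forever"
seq_elapsed=$(( $(date +%s) - start ))

start=$(date +%s)
sleep 2 &               # backgrounded: the shell does not wait
bg_elapsed=$(( $(date +%s) - start ))
wait                    # reap the background job

echo "sequential=${seq_elapsed} background=${bg_elapsed}"
```

In principle you could background the first command in run.sh with `&`, but the approaches described below are more robust.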

This means that you cannot reach your proxy server, because it never starts.

Basically, there are two solutions I can think of. Please note that the idiomatic Docker way is to run only one process per container.

  • I would recommend running your application and your proxy server in two separate containers. You can link these two containers together:

     docker run --name app -p 3000 <your-image> /nodejs/bin/node /app/main.js
     docker run --name proxy -l app:app -p 4000:4000 <your-image> /nodejs/bin/node /proxy/main.js

    The -l app:app flag makes the app container reachable under the hostname app inside the proxy container (Docker does this by adding an entry to the container's /etc/hosts file). This means that inside the proxy container you can use http://app:3000 to reach your application.

  • An alternative solution is to use a process manager such as Supervisord to run several long-running processes in your container in parallel. There's a good article on this in the Docker documentation. It basically boils down to the following:

    • Install supervisord ( apt-get install supervisor on Ubuntu)
    • Create a configuration file (usually in /etc/supervisor/conf.d/yourapplication.conf ) in which you configure all the services that you need to run:

       [supervisord]
       nodaemon=true

       [program:application]
       command=/nodejs/bin/node /app/main.js

       [program:proxy]
       command=/nodejs/bin/node /proxy/main.js
    • Then use supervisord as your start command, for example with CMD ["/usr/bin/supervisord"] in your Dockerfile.

    In this case, both of your processes run in the same container, and you can use http://localhost:3000 to access your upstream application.
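For reference, the two-container approach from the first option can also be sketched as a Docker Compose file. This is only a sketch: the image name and paths follow the question, and the service names are assumptions.

```yaml
# docker-compose.yml (sketch -- service names are assumptions)
app:
  image: your-image
  command: /nodejs/bin/node /app/main.js
  expose:
    - "3000"

proxy:
  image: your-image
  command: /nodejs/bin/node /proxy/main.js
  links:
    - app          # makes the hostname "app" resolve inside this container
  ports:
    - "4000:4000"
```

With this file, `docker-compose up` would start both containers and wire up the link for you.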
