Nginx: broken_header with proxy_protocol and ELB

I am trying to configure proxy_protocol in my nginx configuration. My server is behind an AWS Elastic Load Balancer (ELB), and I have enabled the proxy protocol for both ports 80 and 443.

However, this is what I get when a request reaches my server:

broken header: "  /   '   \DW Vc A{      @  kj98   =5    g@32ED  </A " while reading PROXY protocol, client: 172.31.12.223, server: 0.0.0.0:443 

This is copied directly from the nginx error log, odd characters and all.

Here is a snippet from my nginx configuration:

    server {
        listen 80 proxy_protocol;
        set_real_ip_from 172.31.0.0/20;  # Coming from ELB
        real_ip_header proxy_protocol;
        return 301 https://$http_host$request_uri;
    }

    server {
        listen 443 ssl proxy_protocol;
        server_name *.....com;
        ssl_certificate /etc/ssl/<....>;
        ssl_certificate_key /etc/ssl/<....>;
        ssl_prefer_server_ciphers On;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
        ssl_session_cache shared:SSL:10m;
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        ssl_stapling on;
        ssl_stapling_verify on;
        ...

I cannot find any help on this issue. Other people have had problems with headers, but in their cases the bad header in the error is always readable; it does not look encoded the way mine does.

Any ideas?

amazon-web-services amazon-elb nginx
4 answers

Two suggestions:

  • Make sure your ELB listener is configured to use TCP as the protocol, not HTTP. For example, here is the LB configuration I use for Nginx with proxy_protocol:

     {
       "LoadBalancerName": "my-lb",
       "Listeners": [
         {
           "Protocol": "TCP",
           "LoadBalancerPort": 80,
           "InstanceProtocol": "TCP",
           "InstancePort": 80
         }
       ],
       "AvailabilityZones": [
         "us-east-1a",
         "us-east-1b",
         "us-east-1d",
         "us-east-1e"
       ],
       "SecurityGroups": [
         "sg-mysg"
       ]
     }
  • You mentioned that you enabled the proxy protocol in ELB, so I assume you followed the AWS configuration steps. If so, the ELB should prepend a line like PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n to each request. However, if a request reaches Nginx without that PROXY ... line, it can cause exactly the problem you are seeing. You can reproduce it by hitting the EC2 DNS name directly in a browser, or by ssh-ing into the EC2 instance and trying something like curl localhost; either way you should see a similar broken header error in the Nginx logs.

To find out if it works with a well-formed HTTP request, you can use telnet:

    $ telnet localhost 80
    PROXY TCP4 198.51.100.22 203.0.113.7 35646 80
    GET /index.html HTTP/1.1
    Host: your-nginx-config-server_name
    Connection: Keep-Alive

Then check the Nginx logs for the same broken header error. If you do not see one, the ELB is most likely not sending a correctly formatted PROXY line, and I would suggest reconfiguring the ELB proxy protocol, possibly with a new LB, to make sure it is set up correctly.
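The same telnet exercise can be scripted. Here is a minimal Python sketch, where host, port, and server_name are placeholders for your own instance; it prepends a hand-built PROXY protocol v1 line to an ordinary HTTP request:

```python
import socket

def build_proxy_v1_preamble(src_ip, dst_ip, src_port, dst_port):
    # The human-readable PROXY protocol v1 line an ELB prepends, CRLF-terminated.
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

def probe(host, port, server_name):
    # Replay the telnet test: PROXY line first, then a plain HTTP/1.1 request.
    payload = (
        build_proxy_v1_preamble("198.51.100.22", "203.0.113.7", 35646, 80)
        + f"GET /index.html HTTP/1.1\r\nHost: {server_name}\r\nConnection: close\r\n\r\n"
    )
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload.encode("ascii"))
        return sock.recv(4096)  # first chunk of the response, b"" if closed
```

If probe("localhost", 80, "your-server-name") returns a normal HTTP response while the ELB path still produces broken header errors, the ELB side is the suspect.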


Stephen Karger's solution above is the right one: you have to configure the ELB to support the proxy protocol. Here are the AWS docs for doing just that: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html . The docs are a bit daunting at first, so if you want you can skip straight to steps 3 and 4 of the Enable Proxy Protocol Using the AWS CLI section; those are the only steps required to enable the proxy protocol. In addition, as Stephen suggested, make sure your ELB uses TCP instead of http or https, since neither of those will work correctly with the ELB proxy protocol implementation. I also suggest moving your socket traffic away from common ports such as 80 and 443, so that those standardized ports keep their default behavior. Of course, that call depends entirely on how your application stack looks.

If it helps, you can use the npm wscat package to debug your websocket connections, like so:

    $ npm install -g wscat
    $ wscat --connect 127.0.0.1

If the connection works locally, the problem is most likely your load balancer. If it does not, the problem is almost certainly with your socket host.

In addition, a tool like nmap will help you discover open ports. A good checklist for debugging:

    npm install -g wscat

    # can you connect to it from within the server?
    ssh ubuntu@69.69.69.69
    wscat -c 127.0.0.1:80

    # can you connect to it from outside the server?
    exit
    wscat -c 69.69.69.69:80

    # if not, is your socket port open for business?
    nmap -p 80 69.69.69.69

You can also run nmap from your server to discover open ports. To install nmap on Ubuntu, just sudo apt-get install nmap; on OS X, brew install nmap.
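If you would rather script the reachability check than install nmap, a plain TCP connect from Python's standard library answers the same basic question, "is the port open for business?". This is only a sketch of a single-port check, not a substitute for a real port scan:

```python
import socket

def port_open(host, port, timeout=3.0):
    # Can we complete a TCP handshake? Roughly what `nmap -p <port>` reports
    # as "open" for a single TCP port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```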

Here is the working configuration I have, although at the moment it does not support SSL. In this setup, port 80 serves the Rails app, port 81 serves socket connections through my ELB, and port 82 is open for internal socket connections. Hope this helps someone! Anyone deploying with Rails, Unicorn and Faye should find this useful. :) Happy hacking!

    # sets up deployed ruby on rails server
    upstream unicorn {
        server unix:/path/to/unicorn/unicorn.sock fail_timeout=0;
    }

    # sets up Faye socket
    upstream rack_upstream {
        server 127.0.0.1:9292;
    }

    # sets port 80 to proxy to rails app
    server {
        listen 80 default_server;
        keepalive_timeout 300;
        client_max_body_size 4G;
        root /path/to/rails/public;
        try_files $uri/index.html $uri.html $uri @unicorn;

        location @unicorn {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
            proxy_pass http://unicorn;
            proxy_read_timeout 300s;
            proxy_send_timeout 300s;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /path/to/rails/public;
        }
    }

    # open 81 to load balancers (external socket connection)
    server {
        listen 81 proxy_protocol;
        server_name _;
        charset UTF-8;

        location / {
            proxy_pass http://rack_upstream;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    # open 82 to internal network (internal socket connections)
    server {
        listen 82;
        server_name _;
        charset UTF-8;

        location / {
            proxy_pass http://rack_upstream;
            proxy_redirect off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

I had this error and came across a ticket that ultimately led me to discover an unnecessary proxy_protocol in my nginx.conf file. I deleted it and everything worked again.

Oddly enough, everything worked fine with nginx version 1.8.0, but when I upgraded to nginx version 1.8.1 I started to see the error.


I also had the problem of unreadable headers; here is the cause and how I fixed it.

In my case, Nginx was configured correctly with use-proxy-protocol=true. It complains about a broken header solely because AWS ELB did not add the required header at all (e.g. PROXY TCP4 198.51.100.22 203.0.113.7 35646 80). Nginx sees the encrypted HTTPS payload directly, which is why it prints all those unreadable characters.
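As a quick illustration of why the logged header looks like line noise: the first bytes nginx reads tell the story. A PROXY protocol v1 preamble is plain ASCII, while a TLS ClientHello begins with the handshake record byte 0x16. A small hypothetical check:

```python
def classify_first_bytes(data):
    # Rough guess at what nginx saw at the start of a connection.
    if data.startswith(b"PROXY "):
        return "proxy-protocol v1 preamble"
    if data[:1] == b"\x16":
        # 0x16 is the TLS handshake record type; logged as text it is garbage,
        # which is exactly the symptom described above.
        return "raw TLS handshake (no PROXY header)"
    return "something else"

print(classify_first_bytes(b"PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n"))
# -> proxy-protocol v1 preamble
print(classify_first_bytes(b"\x16\x03\x01\x02\x00\x01"))
# -> raw TLS handshake (no PROXY header)
```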

So why didn't AWS ELB add the PROXY header? It turned out that I had used the wrong ports in the commands that enable the proxy protocol policy: the instance ports should be used instead of 80 and 443.

My ELB has the following port mapping:

    80  -> 30440
    443 -> 31772

The commands should have been:

    aws elb set-load-balancer-policies-for-backend-server \
        --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
        --instance-port 30440 \
        --policy-names ars-ProxyProtocol-policy

    aws elb set-load-balancer-policies-for-backend-server \
        --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
        --instance-port 31772 \
        --policy-names ars-ProxyProtocol-policy

but I used 80 and 443 by mistake.
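To avoid repeating that mistake, the lookup can be made explicit. This is a tiny hypothetical helper, using the port mapping from this answer, that resolves the value to pass as --instance-port from the front-end port:

```python
# Load-balancer port -> instance port, as listed above.
PORT_MAPPING = {80: 30440, 443: 31772}

def instance_port_for(lb_port):
    # The proxy-protocol policy must be attached to the *instance* port,
    # not the port the load balancer listens on.
    try:
        return PORT_MAPPING[lb_port]
    except KeyError:
        raise ValueError(f"no backend listener for load-balancer port {lb_port}")

print(instance_port_for(443))  # -> 31772
```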

Hope this helps someone.

