Unable to disable chunked transfer encoding in nginx with gzip for static resources served from a Node backend

We have a Node/Express web application that serves static resources, in addition to its normal content, via express.static(). In front of it sits an nginx server, which is currently configured to gzip these static asset responses for clients that support it.

However, although nginx does gzip as expected, it drops the Content-Length header sent by the origin and sets Transfer-Encoding: chunked instead. This breaks caching on our CDN.

Below are the responses to a typical request for a static asset (in this case a JS file), first from the backend Node server and then from nginx:

Request

 curl -s -D - 'http://my_node_app/res/my_js.js' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Connection: keep-alive' --compressed -o /dev/null 

Node response headers:

    HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Date: Wed, 07 Jan 2015 02:24:55 GMT
    Cache-Control: public, max-age=0
    Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
    Content-Type: application/javascript
    Content-Length: 37386            // <--- The expected header
    Connection: keep-alive

Response headers from nginx:

    HTTP/1.1 200 OK
    Server: nginx
    Date: Wed, 07 Jan 2015 02:24:55 GMT
    Content-Type: application/javascript
    Transfer-Encoding: chunked       // <--- The problematic header
    Connection: keep-alive
    Vary: Accept-Encoding
    Cache-Control: public, max-age=0
    Last-Modified: Wed, 07 Jan 2015 01:12:05 GMT
    Content-Encoding: gzip

Our current nginx configuration for the static asset location is as follows:

nginx config:

    # cache file paths that start with /res/
    location /res/ {
        limit_except GET HEAD { }

        # http://nginx.com/resources/admin-guide/caching/
        # http://nginx.org/en/docs/http/ngx_http_proxy_module.html
        proxy_buffers 8 128k;
        #proxy_buffer_size 256k;
        #proxy_busy_buffers_size 256k;

        # The cache depends on proxy buffers, and will not work if proxy_buffering is set to off.
        proxy_buffering on;

        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_connect_timeout 2s;
        proxy_read_timeout 5s;

        proxy_pass http://node_backend;
        chunked_transfer_encoding off;

        proxy_cache my_app;
        proxy_cache_valid 15m;
        proxy_cache_key $uri$is_args$args;
    }
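(Not shown above: this location block relies on an upstream and a cache zone defined elsewhere in our config. A minimal hypothetical equivalent of those definitions, with placeholder host, port and path, would look like this:)

    # Hypothetical placeholders for the definitions referenced above
    upstream node_backend {
        server 127.0.0.1:3000;   # placeholder address of the Node app
        keepalive 32;            # pairs with proxy_http_version 1.1 and the empty Connection header
    }

    proxy_cache_path /var/cache/nginx/my_app keys_zone=my_app:10m;   # placeholder path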

As you can see from the above configuration, even though we explicitly set chunked_transfer_encoding off for this location as the nginx docs describe, have proxy_buffering on, and use a large enough proxy_buffers size, the response is still chunked.

What are we missing here?

- Edit 1: version information -

    $ nginx -v
    nginx version: nginx/1.6.1
    $ node -v
    v0.10.30

- Edit 2: nginx gzip config -

    # http://nginx.org/en/docs/http/ngx_http_gzip_module.html
    gzip on;
    gzip_buffers 32 4k;
    gzip_comp_level 1;
    gzip_min_length 1000;
    #gzip_http_version 1.0;
    gzip_types application/javascript text/css;
    gzip_proxied any;
    gzip_vary on;
1 answer

You are right, let me clarify.

Headers are the first thing to be sent. However, since you are using streaming compression, the final compressed size is not known at that point: nginx only knows the size of the uncompressed resource, and that figure would not match the body it actually sends.

Thus, there are two options:

  • chunked transfer encoding
  • compressing the asset completely before sending any data, so that the compressed size is known

You are currently experiencing the first case, and it seems that you really need the second. The easiest way to get the second case is to enable gzip_static, as @kodeninja said in the comments.
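For illustration only, here is a minimal sketch of what that could look like. It assumes the static files are available to nginx on disk (under a hypothetical /var/www/static) with pre-compressed .gz copies next to them; the paths are placeholders, not taken from your setup, and gzip_static requires nginx to be built with ngx_http_gzip_static_module:

    # Hypothetical sketch: serve /res/ straight from disk instead of proxying it
    location /res/ {
        root /var/www/static;   # placeholder path to the built assets
        gzip_static on;         # serve my_js.js.gz when the client accepts gzip
        # The .gz file already exists, so its size is known up front and nginx
        # can send Content-Length instead of Transfer-Encoding: chunked.
    }

The pre-compressed copies can be generated at build time, e.g. by running gzip -k over the asset directory.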

