Nginx 499 Error Codes

I get a lot of 499 error codes from nginx. I gather that this is a client-side issue and not a problem with Nginx or my uWSGI stack. I do see a correlation in the uWSGI logs whenever a 499 occurs:

  {address space usage: 383692800 bytes/365MB} {rss usage: 167038976 bytes/159MB} [pid: 16614|app: 0|req: 74184/222373] 74.125.191.16 () {36 vars in 481 bytes} [Fri Oct 19 10:07:07 2012] POST /bidder/ => generated 0 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 59 bytes (1 switches on core 1760)
  SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /bidder/ (ip 74.125.xxx.xxx) !!!
  Fri Oct 19 10:07:07 2012 - write(): Broken pipe [proto/uwsgi.c line 143] during POST /bidder/ (74.125.xxx.xxx)
  IOError: write error

I am looking for a more detailed explanation, and I hope there is nothing wrong with my nginx configuration for uwsgi. Should I just take it at face value ... that it is not my problem, but a problem with the client?

Thanks.

+89
nginx uwsgi
Oct 19 '12 at 11:28
11 answers

HTTP 499 in Nginx means that the client closed the connection before the server answered the request. In my experience it is usually caused by a timeout on the client side. As far as I know, it is an Nginx-specific error code.
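
One way to see this for yourself is to make the client give up first (a sketch, assuming your server has some endpoint that takes longer than a second to answer):

  # curl gives up and closes the connection after 1 second; if nginx has not
  # finished sending the response by then, it logs the request as a 499.
  curl --max-time 1 http://your-server.example/some-slow-endpoint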

+137
Aug 23 '13 at 20:02

In my case, I was impatient and ended up misreading the log.

In fact, the real problem was the connection between nginx and uWSGI, not between the browser and nginx. If I had loaded the site in my browser and waited long enough, I would have gotten a "504 - Gateway Timeout". But it took so long that I kept trying things and then refreshed the browser. So I never waited long enough to see the 504 error. When you refresh the browser, the previous request is closed, and Nginx writes that to the log as a 499.

Elaboration

Here I will assume that the reader knows as little as I did when I started playing around with this.

My setup was a reverse proxy, the nginx server, and an application server, the uWSGI server, behind it. All requests from the client are sent to the nginx server, then forwarded to the uWSGI server, and the response is sent back the same way. I think this is how everyone uses nginx/uWSGI and how it is supposed to be used.
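
For readers who have not set this up before, the nginx side of such a chain typically looks something like this minimal sketch (server name, port, and socket path are placeholders, not the answerer's real config):

  server {
      listen 80;
      server_name example.com;

      location / {
          include uwsgi_params;            # forward the standard request variables to uWSGI
          uwsgi_pass unix:/run/app.sock;   # the uWSGI application server behind nginx
      }
  }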

My nginx worked as it should, but something was wrong with the uWSGI server. There are two ways (possibly more) in which the uWSGI server can fail to respond to the nginx server.

1) uWSGI says, "I am processing, just wait and you will get a response soon." nginx has a certain period of time it is willing to wait, e.g. 20 seconds. After that, it answers the client with a 504 error (a config sketch follows below).

2) uWSGI is dead, or uWSGI dies while nginx is waiting for it. nginx sees this right away, and in that case it returns a 499 error.
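
As a concrete illustration of case 1), the waiting period on the nginx side is controlled by a directive like the following, placed in the location that does uwsgi_pass (the value is just an example matching the text above; the nginx default is 60 seconds):

  # How long nginx waits for a response from uWSGI before giving up
  # and sending the client a 504.
  uwsgi_read_timeout 20s;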

I was testing my setup by making requests from the client (browser). Nothing happened in the browser; it just kept hanging. After about 10 seconds (less than the timeout) I concluded that something was wrong (which was true) and shut down the uWSGI server from the command line. Then I would go into the uWSGI settings, try something new, and restart the uWSGI server. The moment I shut down the uWSGI server, the nginx server would return a 499 error.

So I kept debugging the 499 error, which meant googling for the 499 error. But if I had waited long enough, I would have gotten the 504 error. Had I gotten the 504, I would have understood the problem better and been able to debug it.

So the conclusion is that the problem was uWSGI, which kept hanging ("Wait a little, just a little longer, then I will have a response for you...").

How I fixed this problem, I do not remember. I think this can be caused by many things.

+61
Dec 07 '14 at 22:52

"The client closed the connection" does not mean the problem is with the browser! Not at all!

You can find 499 errors in the log file if you have a load balancer (LB) in front of your web server (nginx), whether an AWS load balancer or haproxy (custom). In that case, the LB acts as the client to nginx.

If you use the default values for haproxy:

  timeout client 60000
  timeout server 60000

This means the LB times out after 60,000 ms if there is no response from nginx. Timeouts tend to happen on busy websites or with scripts that need more time to execute. You will need to find a timeout value that works for you. For example, extend it to:

  timeout client 180s
  timeout server 180s

And you will probably be set.
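
For reference, here is a sketch of where these values usually live in haproxy.cfg; the surrounding lines are placeholders and everything else in your defaults section stays as it is:

  defaults
      mode http
      timeout connect 5s
      timeout client  180s
      timeout server  180s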

Depending on your setup, you may also see a 504 Gateway Timeout error in your browser, which indicates that something is wrong with php-fpm, but that will not be the case with the 499 errors in your log files.

+18
May 17 '17 at 14:49

In my case, I got 499s when the client's API closed the connection before it received any response. It literally sent a POST and immediately closed the connection. This is resolved by the option:

  proxy_ignore_client_abort on;

Nginx doc
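
A sketch of how that looks in context (the location and upstream address are assumptions; for uwsgi_pass setups there is an analogous uwsgi_ignore_client_abort directive):

  location /api/ {
      proxy_pass http://127.0.0.1:8000;   # assumed upstream
      # Keep handling the proxied request even if the client hangs up, so the
      # upstream response is read and its real status is logged instead of 499.
      proxy_ignore_client_abort on;
  }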

+7
Oct. 25 '18 at 10:45

As you pointed out, the 499 connection termination is logged by nginx itself. But it usually happens when your backend server is too slow and another proxy times out first, or the user's software terminates the connection. So check whether uWSGI is responding quickly, and whether there is heavy load on the uWSGI or database server.
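
One way to check where the time goes is to log nginx's own timing variables, which separate total request time from the time spent waiting on the upstream (a sketch; the format name and log path are arbitrary, and both lines belong in the http block):

  # $request_time = total time nginx spent on the request;
  # $upstream_response_time = time spent waiting on uWSGI.
  log_format timing '$remote_addr [$time_local] "$request" $status '
                    'req_time=$request_time upstream_time=$upstream_response_time';
  access_log /var/log/nginx/timing.log timing;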

In many cases there are other proxies between the user and nginx. Some of them may be in your infrastructure, for example a CDN, a load balancer, or a Varnish cache. Others may be on the user's side, for example a caching proxy.

If you have proxies on your side, such as a load balancer or CDN, you should set the shortest timeout on your backend and gradually longer timeouts on each proxy towards the user.

If you have:

 user >>> CDN >>> Load Balancer >>> Nginx >>> uWSGI 

I recommend you set:

  • n seconds for the uWSGI timeout
  • n + 1 seconds for the nginx timeout
  • n + 2 seconds for the load balancer timeout
  • n + 3 seconds for the CDN timeout.

If you cannot set some of the timeouts (e.g. on the CDN), find out what their timeout is and configure the others relative to it (n, n - 1, ...).

This gives you a correct chain of timeouts, lets you see whose timeout is actually firing, and returns the correct response code to the user.
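
As a sketch of what that cascade could look like with n = 60 seconds (file names and exact directives depend on your stack; these are assumptions, not a prescription):

  # uWSGI (app.ini): innermost, shortest timeout -- kill a worker that has been
  # stuck on a single request for more than 60 seconds
  harakiri = 60

  # nginx (site config): wait slightly longer than uWSGI (n + 1)
  uwsgi_read_timeout 61s;

  # haproxy (haproxy.cfg): wait slightly longer than nginx (n + 2)
  timeout server 62s

  # CDN: set its origin/read timeout to n + 3 = 63 seconds in the provider's
  # own configuration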

+4
Jun 23 '19 at 8:34

This error is quite easy to reproduce using the standard nginx configuration with php-fpm.

Holding down the F5 key on a page creates dozens of refresh requests to the server, and each previous request is cancelled by the browser on the next refresh. In my case I found dozens of 499s in the log file of a client's online shop. From nginx's point of view: if the response was not delivered to the client before the next refresh request, nginx logs a 499 error.

  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:32 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:35 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
  mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:35 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)

If the php-fpm processing takes longer (e.g. a heavy WordPress page), it can of course cause problems. I have heard of php-fpm crashes, for example, but I believe they can be prevented by configuring the services properly, e.g. handling calls to xmlrpc.php.
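
If you want to reproduce this deliberately rather than by hammering F5, here is a minimal sketch in Python (host, port, and path are placeholders; the 499 only appears when the backend has not finished answering before the socket closes):

  import socket

  # Send a few requests and hang up before reading the response -- from nginx's
  # point of view the client went away, which is exactly what a 499 records.
  for _ in range(5):
      s = socket.create_connection(("127.0.0.1", 80), timeout=5)
      s.sendall(b"GET /some-slow-page HTTP/1.1\r\n"
                b"Host: localhost\r\n"
                b"Connection: close\r\n\r\n")
      s.close()  # abort before the response arrives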

+3
Jun 19 '18 at 8:06

... landed here from a Google search.

I found the answer elsewhere here on Stack Overflow, which was to increase the idle connection timeout on my AWS Elastic Load Balancer!

(I had set up a Django site behind an nginx/apache reverse proxy, and a really slow backend log-in request was genuinely hitting that timeout.)

+2
Mar 15 '16 at 5:33

I once received 499 "The request was denied by antivirus" as an HTTP AJAX response (a false positive from Kaspersky Internet Security with light heuristic analysis; deep heuristic analysis correctly recognized that there was nothing wrong).

0
Jun 16 '15 at 8:09

One reason for this behavior might be that you are using http for uwsgi instead of socket. Use the command below if you are running uwsgi directly.

  uwsgi --socket :8080 --module app-name.wsgi 

The same configuration in an .ini file:

  chdir = /path/to/app/folder
  socket = :8080
  module = app-name.wsgi
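
For completeness, the matching nginx side then speaks the binary uwsgi protocol to that port instead of plain HTTP (a sketch; adjust the address to your setup). If you keep uWSGI in --http mode instead, nginx has to use proxy_pass rather than uwsgi_pass.

  location / {
      include uwsgi_params;
      uwsgi_pass 127.0.0.1:8080;   # talks the uwsgi protocol to the socket above
  }
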
0
Jun 13 '16 at 19:17

I ran into this problem and the reason was related to the Kaspersky Protection plugin in the browser. If you encounter this, try disabling your plugins and see if this fixes your problem.

0
Sep 18 '16 at 18:12

Many things can cause a 499 error; in my case it was the Content-Length field being omitted in the HTTP request from the pocco client.

-4
Jun 01 '15 at 8:18


