Nginx returns an HTTP 499 error after 60 seconds, despite the configuration (PHP and AWS)

Late last week I noticed a problem on one of our mid-sized AWS instances: Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The requested page is a PHP script.

I have spent several days trying to find an answer, and have tried everything I could find on the Internet, including several entries here on Stack Overflow; nothing works.

I have tried changing the PHP settings, the PHP-FPM settings and the Nginx settings. You can see the question I raised on the Nginx forums on Friday ( http://forum.nginx.org/read.php?9,237692 ), although that received no response, so I am hoping to find an answer here before I am forced to move back to Apache, which I know just works.

This is not the same problem as the HTTP 500 errors reported in other entries.

I have been able to replicate the problem on a fresh AWS Micro instance of Nginx with PHP 5.4.11.

To help anyone who wants to see the problem in action, I'm going to walk you through the setup I ran for the latest Micro test server.

You will need to start a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5.

This PasteBin entry contains the complete setup to run in order to mirror my test environment. You just need to change example.com in the Nginx config at the end:

http://pastebin.com/WQX4AqEU

After that setup, you just need to create the sample PHP file that I am testing with:

<?php sleep(70); die( 'Hello World' ); ?> 

Save this in your webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a web page and tail the access log /var/log/nginx/example.access.log, you will notice that you receive an HTTP 1.1 499 response after 60 seconds.
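
If you want to confirm what the access log is recording without reading it by eye, a quick tally of response statuses helps. This is just a sketch that assumes nginx's default "combined" log format, where the status code is the ninth whitespace-separated field:

```shell
# Tally HTTP status codes in an nginx access log. In the default
# "combined" log format the status code is the 9th whitespace-
# separated field; adjust $9 if you use a custom log_format.
status_tally() {
    awk '{ counts[$9]++ } END { for (s in counts) print s, counts[s] }' "$1"
}

# Example: status_tally /var/log/nginx/example.access.log
```

Seeing the 499 count climb in step with your test requests confirms it is nginx, not the browser, recording the aborts.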

Now that you can see the timeout, I'll walk through some of the configuration changes I made to both PHP and Nginx to try to work around it. For PHP, I'll create several config files so that they can be easily disabled later:

Update PHP FPM configuration to include external configuration files

 echo 'include=/usr/local/php/php-fpm.d/*.conf' | sudo tee -a /usr/local/php/etc/php-fpm.conf

Create a new PHP-FPM configuration to override the request timeout

 echo '[www]
 request_terminate_timeout = 120s
 request_slowlog_timeout = 60s
 slowlog = /var/log/php-fpm-slow.log' | sudo tee /usr/local/php/php-fpm.d/timeouts.conf

Change some global settings to provide an emergency restart time of 2 minutes

 # Create some global tweaks
 echo '[global]
 error_log = /var/log/php-fpm.log
 emergency_restart_threshold = 10
 emergency_restart_interval = 2m
 process_control_timeout = 10s' | sudo tee /usr/local/php/php-fpm.d/global-tweaks.conf

Then we will change some parameters of PHP.INI, again using separate files

 # Log PHP errors
 echo '[PHP]
 log_errors = on
 error_log = /var/log/php.log' | sudo tee /usr/local/php/conf.d/errors.ini

 # Raise size limits and timeouts
 echo '[PHP]
 post_max_size = 32M
 upload_max_filesize = 32M
 max_execution_time = 360
 default_socket_timeout = 360
 mysql.connect_timeout = 360
 max_input_time = 360' | sudo tee /usr/local/php/conf.d/filesize.ini

As you can see, this raises the execution and socket timeouts to 6 minutes (360 s) and enables error logging.

Finally, I edit some Nginx settings to raise the timeouts on that side as well.

First, I edit the /etc/nginx/nginx.conf file and add the directive fastcgi_read_timeout 300; to the http block.
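
In context, that edit looks like the following fragment of /etc/nginx/nginx.conf (only the fastcgi_read_timeout line is new; the surrounding http block already exists):

```nginx
http {
    # ... existing settings ...

    # Allow FastCGI (php-fpm) responses to take up to 5 minutes.
    fastcgi_read_timeout 300;
}
```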

Then I edit the /etc/nginx/sites-enabled/example file that we created earlier (see the pastebin entry) and add the following parameters to the server directive:

 client_max_body_size 200;
 client_header_timeout 360;
 client_body_timeout 360;
 fastcgi_read_timeout 360;
 keepalive_timeout 360;
 proxy_ignore_client_abort on;
 send_timeout 360;
 lingering_timeout 360;

Finally, I add the following to the location ~ \.php$ block inside the server section:

 fastcgi_read_timeout 360;
 fastcgi_send_timeout 360;
 fastcgi_connect_timeout 1200;

Before retrying the script, I restart nginx and php-fpm to make sure the new settings have been picked up. I then request the page again and still get an HTTP/1.1 499 entry in Nginx's example.access.log.

So where am I going wrong? This just works on Apache when I set the PHP max execution time to 2 minutes.

I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it. I actually suspect these settings have been raised far too much, since all this should need is PHP's max_execution_time and default_socket_timeout, plus Nginx's fastcgi_read_timeout in the location block.

Update 1

After running another test to show that the problem is not the client dying, I modified the test file to:

 <?php
 file_put_contents('/www/log.log', 'My first data');
 sleep(70);
 file_put_contents('/www/log.log', 'The sleep has passed');
 die('Hello World after sleep');
 ?>

If I run the script from a web page, I can see that the file's contents are set to the first line. 60 seconds later, the error appears in the Nginx log. 10 seconds after that, the file's contents change to the second line, proving that PHP runs the script to completion.

Update 2

Setting fastcgi_ignore_client_abort on; changes the HTTP 499 response to an HTTP 200, but still nothing is returned to the end client.
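
For anyone repeating this, the directive lives in the same location ~ \.php$ block as the other fastcgi settings (a fragment, assuming the pastebin config above):

```nginx
location ~ \.php$ {
    # ... existing fastcgi_* settings ...

    # Keep processing the request even if the client disconnects;
    # nginx then logs a 200 instead of a 499, but the response
    # still has no client left to go to.
    fastcgi_ignore_client_abort on;
}
```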

Update 3

Installing Apache and PHP (5.3.10) directly on the box (using apt) and then raising the execution time showed that the problem occurs on Apache as well. The symptoms are the same as with Nginx: an HTTP 200 in the logs, but the actual client connection times out before the response arrives.

I have also started to notice in the Nginx logs that if I test using Firefox, it makes a duplicate request (i.e. the PHP script executes twice when it takes longer than 60 seconds). Although that appears to be the client retrying once the first attempt times out.

+43
php amazon-web-services nginx
Mar 25 '13 at 11:09
4 answers

The cause of the problem is the Elastic Load Balancer (ELB) on AWS. By default it drops connections after 60 seconds of inactivity, which is what causes the problem.

So it was not Nginx, PHP-FPM or PHP, but the load balancer.

To fix this, simply go to the ELB's Description tab, scroll down the page, and click the "(Change)" link next to the value that says "Idle Timeout: 60 seconds".
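
For reference, the same change can be scripted. This is a sketch using the modern AWS CLI (which did not exist when this was written; the console route above works too), against a hypothetical classic ELB named my-load-balancer:

```shell
# Raise the ELB idle timeout from the default 60s to 120s so that
# slow PHP responses are not cut off mid-request.
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'
```

Whatever value you pick needs to be longer than your slowest expected request, and your nginx/PHP timeouts should be at least as long.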

+54
Mar 25 '13 at 17:37

I thought I would leave my two cents. First of all, the problem is not related to PHP (then again, it may still be; PHP always surprises me :P). That much is certain. It is mostly caused by a server being proxied to itself, in particular by hostnames/aliases: in your case it could be the load balancer requesting nginx, and nginx going back to the load balancer, and round it goes.

I ran into a similar problem with nginx as the load balancer and apache as the web server/proxy.

+1
Mar 09 '16 at 16:13

You need to find out where the problem lives. I do not know the exact answer, but let's try to narrow it down.

We have three elements: nginx, php-fpm and php. As you said, the same PHP settings under Apache are fine, right? Have you tried Apache instead of nginx on the same OS/host/etc.?

If we can see that PHP is not a suspect, then we have two suspects left: nginx and php-fpm.

To rule out nginx: try setting up the same "system" under ruby. See https://github.com/garex/puppet-module-nginx for an idea of a minimal ruby install. Or use google (maybe it will be even better).

My main suspect here is php-fpm.

Try playing with these settings:

  • php-fpm's request_terminate_timeout
  • nginx's fastcgi_ignore_client_abort
0
Mar 25 '13 at 14:18

Actually, I ran into the same problem on the same kind of server, and I realized that I had not restarted nginx after changing its configuration, so every hit on an nginx URL got an HTTP 499 response. Once I restarted nginx, it started working normally with HTTP 200 responses.

0
Oct 18 '16 at 10:18


