Recently I noticed that the nginx error log for a site I was working on was filling up with timeout messages similar to the ones below:
2012/07/06 17:21:01 [error] 23897#0: *8870 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 173.25.1.129, server: project.com, request: "GET /jobs/update HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "project.com"
2012/07/06 17:41:01 [error] 23897#0: *8960 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 173.25.1.129, server: project.com, request: "GET /jobs/update HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "project.com"
The script in question is normally called via a cron job; it pulls a lot of information from external sources and stores it in a local database for later reference. Loading the URL in a browser resulted in the page working for a while and then returning a 504 Gateway Time-out error page. So things obviously weren't happening quickly enough for nginx's liking, and it was recording the problem as a timeout error in the log file.
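If you'd rather confirm the behaviour from the command line than sit watching a browser spin, something like the curl invocation below works; the /jobs/update path is just the endpoint from my logs, so substitute whichever slow URL you're debugging.
[codesyntax lang="bash"]
# Print the HTTP status code and total request time for the slow endpoint
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" http://project.com/jobs/update
[/codesyntax]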
It turns out the time nginx will wait for a PHP FastCGI process can be set in the configuration using the fastcgi_read_timeout directive. By default, if this is not defined, nginx will wait 60 seconds for the script to finish before timing out. By editing the configuration file for the affected site and adding a fastcgi_read_timeout directive to the PHP handler section, I was able to avoid any further timeout errors from the script.
Below is my amended handler section with the timeout set to 5 minutes.
[codesyntax lang="bash"]
location ~ \.php$ {
    client_max_body_size 400m;
    set $php_root /home/sites/mysite/www/public;
    # Allow slow scripts up to 5 minutes before nginx gives up on the upstream
    fastcgi_read_timeout 300;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name;
    include /etc/nginx/fastcgi_params;
}
[/codesyntax]
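After changing the configuration, nginx needs to be told to pick it up. A quick sanity check followed by a reload does the trick; the commands below assume nginx is installed as a system service, so adjust them for your setup.
[codesyntax lang="bash"]
# Verify the configuration parses cleanly, then reload nginx without dropping connections
nginx -t && service nginx reload
[/codesyntax]
Note that fastcgi_read_timeout can also be set at the http or server level if you want it to apply more broadly than a single location block.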