Multi-threaded FastCGI Application

I want to write a FastCGI application that should handle multiple simultaneous requests using threads. I looked at the threaded.c sample that comes with the SDK:

    #define THREAD_COUNT 20

    static int counts[THREAD_COUNT];

    static void *doit(void *a)
    {
        int rc, i, thread_id = (int)a;
        pid_t pid = getpid();
        FCGX_Request request;
        char *server_name;

        FCGX_InitRequest(&request, 0, 0);

        for (;;)
        {
            static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
            static pthread_mutex_t counts_mutex = PTHREAD_MUTEX_INITIALIZER;

            /* Some platforms require accept() serialization, some don't.. */
            pthread_mutex_lock(&accept_mutex);
            rc = FCGX_Accept_r(&request);
            pthread_mutex_unlock(&accept_mutex);

            if (rc < 0)
                break;

            server_name = FCGX_GetParam("SERVER_NAME", request.envp);

            FCGX_FPrintF(request.out, …
            …
            FCGX_Finish_r(&request);
        }

        return NULL;
    }

    int main(void)
    {
        int i;
        pthread_t id[THREAD_COUNT];

        FCGX_Init();

        for (i = 1; i < THREAD_COUNT; i++)
            pthread_create(&id[i], NULL, doit, (void*)i);

        doit(0);

        return 0;
    }

The FastCGI Specification explains how the web server will determine how many connections the FastCGI application supports:

The web server can query specific variables within the application. The server will typically perform a query on application startup in order to automate certain aspects of the system's configuration.

...

• FCGI_MAX_CONNS: the maximum number of concurrent transport connections this application will accept, e.g. "1" or "10".

• FCGI_MAX_REQS: the maximum number of concurrent requests this application will accept, e.g. "1" or "50".

• FCGI_MPXS_CONNS: "0" if this application does not multiplex connections (i.e. handle concurrent requests over each connection), "1" otherwise.

But the return values for this query are hardcoded in the FastCGI SDK: it answers "1" for FCGI_MAX_CONNS and FCGI_MAX_REQS, and "0" for FCGI_MPXS_CONNS. Thus, the threaded.c example will never receive multiple connections.

I tested the sample using lighttpd and nginx, and the application processed only one request at a time. How can I get my application to handle multiple requests? Or is this the wrong approach?

+7

3 answers

I tested the threaded.c program with http_load, running it against nginx with only one instance of the program. If the requests were serviced sequentially, I would expect 20 requests to take 40 seconds, even when sent in parallel. Here are the results (I used the same numbers as Andrew Bradford: 20, 21 and 40) -

20 Requests, 20 in parallel, took 2 seconds -

    $ http_load -parallel 20 -fetches 20 request.txt
    20 fetches, 20 max parallel, 6830 bytes, in 2.0026 seconds
    341.5 mean bytes/connection
    9.98701 fetches/sec, 3410.56 bytes/sec
    msecs/connect: 0.158 mean, 0.256 max, 0.093 min
    msecs/first-response: 2001.5 mean, 2002.12 max, 2000.98 min
    HTTP response codes:
      code 200 -- 20

21 Requests, 20 in parallel, took 4 seconds -

    $ http_load -parallel 20 -fetches 21 request.txt
    21 fetches, 20 max parallel, 7171 bytes, in 4.00267 seconds
    341.476 mean bytes/connection
    5.2465 fetches/sec, 1791.55 bytes/sec
    msecs/connect: 0.253714 mean, 0.366 max, 0.145 min
    msecs/first-response: 2001.51 mean, 2002.26 max, 2000.86 min
    HTTP response codes:
      code 200 -- 21

40 requests, 20 in parallel, took 4 seconds -

    $ http_load -parallel 20 -fetches 40 request.txt
    40 fetches, 20 max parallel, 13660 bytes, in 4.00508 seconds
    341.5 mean bytes/connection
    9.98732 fetches/sec, 3410.67 bytes/sec
    msecs/connect: 0.159975 mean, 0.28 max, 0.079 min
    msecs/first-response: 2001.86 mean, 2002.62 max, 2000.95 min
    HTTP response codes:
      code 200 -- 40

This shows that even though the values of FCGI_MAX_CONNS, FCGI_MAX_REQS and FCGI_MPXS_CONNS are hardcoded, requests are served in parallel.

When nginx receives several requests, it sends them all to the FCGI application back to back; it does not wait for a response to the first request before sending the second. In the FCGI application, while one thread is serving the first request, another thread does not wait for the first to complete: it picks up the second request and starts working on it. And so on.

So the only time you lose is the time it takes to read a request from the queue, and that is usually negligible compared to the time it takes to process the request.

+6

There is no single answer to this question, since it depends not only on the FastCGI protocol but, above all, on the FastCGI process manager being used. For Apache 2 web servers, the FastCGI process manager is typically mod_fastcgi or mod_fcgid, and the two behave differently. mod_fastcgi appears to be multithreading-aware and will send concurrent requests to a FastCGI server that has announced it supports them. mod_fcgid, at least for now (this may change in the future), is not: it always spawns a new FastCGI server process for each concurrent request and never sends parallel requests to a single FastCGI server.

All that can be said is: yes, FastCGI allows for multithreaded FastCGI servers, but the environment the FastCGI server runs in must also make that feature a reality. In practice it may or may not, and unfortunately mod_fcgid, at least, does not yet.

If your FastCGI SDK came from mod_fcgid, that may be why the response to the FCGI_MAX_CONNS management query always returns a fixed, hardcoded value of 1.

You may be interested in my recent question and in two other web links, all three of which touch on the topic of multithreaded FastCGI servers:

+2

I think the way you are testing may be limiting you to a single thread. I faced a similar situation using libfcgi and lighttpd, but found that when I tested with Firefox, Firefox would artificially hold back an HTTP request to the server until the previous request to the same server had completed. The tool you use for testing may be doing something similar.

You do not need to change FCGI_MAX_CONNS, FCGI_MAX_REQS or FCGI_MPXS_CONNS. The hardcoded values should not matter with modern web servers such as nginx or lighttpd.

Using a command-line tool such as curl and spawning 20 curl processes at once, all hitting the server simultaneously, activates all 20 threads, and all 20 curl processes finish at the same time after 2 seconds when run against the threaded.c example provided by the SDK (which has an explicit sleep(2) call).

My lighttpd configuration looks like this:

    fastcgi.server = (
        "/test" => (
            "test.fastcgi.handler" => (
                "socket"      => "/tmp/test.fastscgi.socket",
                "check-local" => "disable",
                "bin-path"    => "/tmp/a.fastcgi",
                "max-procs"   => 1,
            )
        )
    )

With max-procs set to 1, only one instance of your fcgi program is spawned, and lighttpd will queue further requests on the socket when they arrive before previous requests have finished.

If you spawn 21 curl processes, the first 20 should finish after 2 seconds and the last one after another 2 seconds. Spawning 40 curl processes should take about as long as 21 (just over 4 seconds in total).

+1
