How does FastCGI work on a web server (e.g. Apache 2.2+)?

I had a look at the FastCGI sources (fcgi-2.4.0), and there is actually no sign of fork() in there. If I understand correctly, the web server hosts the FastCGI module (compiled in, or loaded as an SO/DLL), and it is the web server that listens on the main socket (usually TCP port 80).

On *nix, the FastCGI module "locks" the listening socket with a file write lock (libfcgi/os_unix.c:989) over the whole file descriptor; so when new connections arrive, only one FastCGI process at a time can accept them. The lock on the listening socket is released as soon as a connection is accepted, before the HTTP request is served.

Since the FastCGI module itself is neither multi-process nor multi-threaded (no internal use of fork/pthread_create), I assume that handling several simultaneous connections is achieved by the web server forking multiple FastCGI processes (via OS_SpawnChild). If, for example, 3 FastCGI processes are created (Apache calls OS_SpawnChild 3 times), does this mean we can only serve 3 requests at a time?

A) Is my picture of how FastCGI operates correct?

B) If the OS cost of creating a new process / opening a new connection to a local database can be considered negligible, what are the advantages of FastCGI over the old-fashioned CGI executable approach?

Thank you, Ema! :-)

+7
Tags: c++, c, apache, fastcgi
4 answers

The speed gain of FastCGI over plain CGI is that the processes are persistent. For example, if you have database handles to open, you open them once. The same goes for any caching.

The main gain comes from not having to spawn a new php/perl/etc. interpreter on every request, which takes a surprising amount of time.

If you want several simultaneous connections, you need to run several FastCGI processes. FastCGI is not a way to handle more connections through some special concurrency mechanism; it is a way to speed up individual requests, which in turn lets you handle more of them. But yes, you are right: more concurrent requests require more processes.
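The persistent-process gain is visible in the classic libfcgi request loop (a sketch using libfcgi's fcgi_stdio.h wrapper; it needs libfcgi and a FastCGI-capable server to actually run, and the commented-out DB handle is hypothetical):

```c
#include "fcgi_stdio.h"   /* libfcgi's drop-in replacement for stdio.h */

int main(void)
{
    /* Everything before the loop runs ONCE per process, not once per
     * request. With plain CGI, this setup would be redone on every hit:
     *   db = db_connect("localhost");   // hypothetical persistent handle
     */
    int requests = 0;     /* survives across requests: the process lives on */

    while (FCGI_Accept() >= 0) {        /* one iteration per request */
        requests++;
        printf("Content-type: text/plain\r\n\r\n");
        printf("Request #%d served by this persistent process\n", requests);
    }
    return 0;
}
```

The counter incrementing across requests is the whole point: the process, its heap, and anything it opened in setup outlive each individual request.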

+5

The processes spawned by FastCGI are persistent: they are not killed after handling a request; instead, they are "pooled" and reused.

+4

B) Yes, if the cost of spawning were zero, then old-fashioned CGI would be pretty good. So if you don't get a lot of hits, plain old CGI is fine; go with it. The point of FastCGI is to serve applications that benefit from persistent state or structures that must be built BEFORE you can do your real work, for example running queries against large databases where you want to keep the database libraries in memory instead of reloading the whole shebang every time a request comes in.

It matters when you have many, MANY hits.

+2

Indeed,

since (A) seems fine, now what about (B)? If I'm talking about executables (properly compiled C/C++ programs, not scripts like perl/php/...), and if we consider both the cost of spawning a process and the cost of opening a new DB connection, would the FastCGI approach be only a marginal gain compared to regular CGI executables?

I mean, considering that Linux forks processes very quickly, and that the database runs locally (e.g. MySQL on the same node), the time needed to launch a new executable and connect to the database is practically zero. In that case, with no interpretation involved, only Apache C/C++ modules would be faster than this.

With the FastCGI approach you are also more exposed to memory leaks, because the process is not forked/restarted on every request... So, at this point, if you need to develop a CGI in C/C++, is it better to use plain CGI and/or Apache C/C++ modules directly?

Again, I'm not talking about scripts (perl/php/...), I'm talking about compiled CGI executables.

Thanks again, cheers, Ema! :-)

+1
