Various server-side Perl scripts invoke a Perl module that implements much of the site's functionality. EDIT: The scripts use use lib to load the libraries from a directory. During busy periods the scripts (not the libraries) become zombies and overload the server.
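For context, a minimal sketch of the setup described, with a hypothetical library path and module name (neither is given in the question):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical library directory and module name, for illustration only
    use lib '/var/www/perl-lib';
    use SiteFunctions;

    print "Content-type: text/html\n\n";
    print SiteFunctions::render_page();   # hypothetical entry point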
In the process list:
    319 ?  Z  0:00 [scriptname1.pl] <defunct>
    320 ?  Z  0:00 [scriptname2.pl] <defunct>
    321 ?  Z  0:00 [scriptname3.pl] <defunct>
I have hundreds of copies of each.
EDIT: We do not use fork, system, or exec, except for the SSI directive.
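For illustration, the kind of SSI directive meant here would look something like this in a page (the script path is hypothetical, not taken from the question):

    <!--#exec cgi="/cgi-bin/scriptname1.pl" -->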
As far as I know, in this case httpd itself is the parent of these processes. MaxRequestsPerChild is set to 0, which should keep a parent from dying before its child processes complete.
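For reference, the setting in question would look like this in httpd.conf (a sketch of a typical Apache 1.3 prefork setup, not our actual configuration):

    # 0 = children are never recycled based on request count
    MaxRequestsPerChild 0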
Until now we believed that temporarily disabling some of the scripts helped the server reap the defunct processes and kept it from crashing, but zombie processes are without doubt still being created. gbacon seems to be closest to the truth with his theory that the server simply cannot handle the load.
What can cause httpd to abandon these processes? Is there any best practice to prevent them?
thanks
Answer: The credit goes to Rob. According to him, SSI directives generated by CGI scripts will not be processed: in the Apache 1.3 request loop, SSI evaluation happens before the CGI runs. This was fixed in Apache 2.0 and later, so CGI output can contain SSI directives.
Since we were running Apache 1.3, every page view left defunct processes behind from the SSI. Although the server tried to reap them, it was too busy serving requests to keep up, so eventually it became unresponsive and crashed. As a short-term fix we reviewed all the SSIs and moved some of the work to the client side to free up server resources and give it time to reap the zombies. Later we upgraded to Apache 2.2.
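For context, on Apache 2.x the usual way to have SSI directives in CGI output evaluated is to pass that output through the INCLUDES filter from mod_include; a minimal sketch, assuming a typical layout (the directory path is illustrative, not our actual configuration):

    # httpd.conf sketch for Apache 2.x
    <Directory "/var/www/cgi-bin">
        Options +ExecCGI +Includes
        SetOutputFilter INCLUDES
    </Directory>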
perl apache zombie-process