Getting stack trace from Perl "Out of memory" error

tl;dr: how to dump a Perl stack trace when a Perl httpd process runs out of memory

We have a mod_perl 2 server, Perl 5.8.8, RHEL 5.6, Linux 2.6.18.

Fairly often, and unpredictably, an httpd child process starts consuming all available memory at an alarming rate. At least we have used BSD::Resource::setrlimit(RLIMIT_VMEM, ...) so that the process dies with "Out of memory!" before it takes the whole server down.
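
For reference, the limit we set looks roughly like this (a minimal sketch; the 512 MB cap below is illustrative, not our production value):

    use BSD::Resource;

    # Cap this child's address space so a runaway request dies with
    # "Out of memory!" instead of exhausting the whole machine.
    my $limit = 512 * 1024 * 1024;          # bytes; illustrative value
    setrlimit(RLIMIT_VMEM, $limit, $limit)
        or warn "setrlimit(RLIMIT_VMEM) failed: $!";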

We do not know where in the code this happens, and it is hard to reproduce without hours of load.

We would really like to get a Perl stack trace before the process dies, so we know which code is triggering this. Unfortunately, "Out of memory!" is an untrappable error.

Here are the options I am considering, each of which has its drawbacks:

1) Use the $^M emergency memory pool. This requires recompiling perl with -DPERL_EMERGENCY_SBRK and -Dusemymalloc (see the first sketch after this list).

2) Sprinkle tons of logging statements throughout the code, then analyze the logs to see where the process died.

3) Write an external script that continuously scans the pool of httpd processes and, if it sees one using too much memory, sends it a USR2 signal (which we have arranged to dump a stack trace; see the second sketch after this list).

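For option 1, here is a minimal sketch of what we have in mind, assuming a perl rebuilt with -DPERL_EMERGENCY_SBRK and -Dusemymalloc (untested; the pool size and the pattern matched are our own choices):

    use Carp ();

    # Reserve a 64 KB emergency pool; with an emergency-sbrk perl, malloc
    # can fall back to it, which should give the die handler room to run.
    $^M = 'a' x 65536;

    $SIG{__DIE__} = sub {
        my ($err) = @_;
        return unless $err =~ /Out of memory/;
        # longmess() builds the stack trace without die()ing again;
        # the original die still propagates afterwards.
        print STDERR Carp::longmess("Trapped: $err");
    };
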
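For option 3, the in-process side is just a USR2 handler that logs the current Perl stack (a sketch, not what we run today):

    use Carp ();

    # Installed in every mod_perl child, e.g. from startup.pl.
    $SIG{USR2} = sub {
        print STDERR Carp::longmess("SIGUSR2: current Perl stack");
    };

and the external watcher could be as simple as this hypothetical loop (the pgrep pattern and the 400 MB threshold are made up for illustration):

    # Scan httpd children and signal any that look like runaways.
    chomp(my @pids = `pgrep httpd`);
    for my $pid (@pids) {
        my $vsz_kb = `ps -o vsz= -p $pid` + 0;    # virtual size in KB
        kill 'USR2', $pid if $vsz_kb > 400_000;   # made-up threshold
    }

Note that with Perl 5.8's deferred signals the handler runs at the next safe opcode boundary, so the trace should land close to the code doing the allocating.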


2 answers

You could interpose malloc/free with LD_PRELOAD: build a small .so that overrides malloc, where your malloc calls the real malloc and, when an allocation fails or usage crosses a threshold, dumps a trace. This is essentially the interposition trick that efence uses.

That said, efence is aimed at catching buffer overruns and underruns, not exhaustion (it aborts on bad accesses, not on OOM). You might also look at failmalloc, which makes allocations fail deliberately, so you can exercise the out-of-memory path under controlled conditions (in testing).


Take a look at mod_backtrace; I have not used it myself. The backtrace it produces is at the C level, so

  • to get something useful you would have to get from the C frames back to the Perl stack, i.e. extend the backtrace into Perl, or
  • use gdb; the mod_perl documentation has recipes for debugging a crashy Perl under gdb.
