Linux core dumps too big!

Recently, I noticed an increase in the size of the core dumps created by my application. Initially they were only about 5 MB and contained about 5 stack frames; now I get core dumps > 2 GB, and the information contained in them is no different from the smaller dumps.

Is there any way to control the size of the generated core dumps? Shouldn't they at least be smaller than the application binary itself?

The binaries are compiled in this way:

  • Compiled in release mode with debug symbols (i.e., the compiler's -g option in GCC).
  • Debug symbols are copied to a separate file and stripped from the binary.
  • A GNU debug link pointing to the symbol file is added to the binary (see the sketch after this list).
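A typical way to do this with GNU binutils looks like the following sketch (the file names myapp and myapp.debug are assumptions, not taken from the question):

    # Copy the debug info into a separate file, strip it from the
    # binary, then record a .gnu_debuglink section pointing at it.
    objcopy --only-keep-debug myapp myapp.debug
    objcopy --strip-debug myapp
    objcopy --add-gnu-debuglink=myapp.debug myapp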

At application startup there is a call to setrlimit that sets the core dump size limit (RLIMIT_CORE) to infinity. Is this the problem?
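A minimal sketch of what such a call looks like, assuming RLIMIT_CORE is the resource being raised (which the question implies):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Remove any cap on core dump size (the call described above).
           Fails for unprivileged processes if the hard limit is lower. */
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");
        return 0;
    }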

+7
linux coredump
2 answers

Yes - do not allocate so much memory :-)

The core dump contains a full image of your application's address space, including code, stack, and heap (malloc'd objects, etc.).

If your core dumps are > 2 GB, it means that at some point you had that much memory allocated.
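You can check this by listing the loadable segments recorded in the dump; their memory sizes add up to roughly the size of the file (the core file name here is an assumption):

    # List the PT_LOAD segments captured in the dump; the MemSiz
    # column shows how much address space each one covers.
    readelf --segments core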

You can use setrlimit to set a lower limit for the size of a core dump, at the risk of ending up with a core dump that cannot be decoded (because it is incomplete).
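For example, a minimal sketch of capping the dump size from inside the process (the 100 MB figure is an arbitrary assumption):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Cap core dumps at ~100 MB (arbitrary example value).
           A truncated dump may not be fully usable in a debugger. */
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = 100 * 1024 * 1024;  /* soft limit, in bytes */
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }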

+11

Yes, setrlimit is why you get large core files. You can set a core size limit in most shells; for example, in bash you can do ulimit -c 5000000 . However, your call to setrlimit will override this.

/etc/security/limits.conf can also be used to set an upper bound on core size.
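A sketch of such an entry (the 5000 KB value mirrors the ulimit example above; the line format is <domain> <type> <item> <value>, and core sizes in this file are given in KB):

    # /etc/security/limits.conf
    # Cap core files at 5000 KB for all users (hard limit).
    *    hard    core    5000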

+1
