Is the size of the core file a reflection of memory usage when the application crashes?

My application (C++ on Solaris 10, 32-bit) crashed, and the core file it produced is 4 GB. Can I assume that the application was using close to 4 GB of memory (the same as the size of the core file) when it crashed? P.S. My application is standalone and does not depend on any other processes.

Is there a way to check the application's shared memory usage from the core file?

2 answers

Yes, the core file is a dump of the entire virtual memory area used by the process at the moment of the crash. A 32-bit process cannot produce a core file larger than 4 GB.

On Solaris, you can use several commands located in /usr/proc/bin to extract information from the core file, in particular:

  • file core: confirms that the core file was produced by your process
  • pstack core: tells you where the process crashed
  • pmap core: shows memory usage per address range
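
For example, a session against the crashed binary might look like the following (the file output line and the binary name 'myapp' are illustrative, not taken from the question):

    # Confirm the core was produced by your binary ('myapp' is a placeholder)
    $ file core
    core: ELF 32-bit MSB core file SPARC Version 1, from 'myapp'

    # Print the stack of the thread that crashed
    $ pstack core

    # List the mappings recorded in the core; their sizes should account
    # for most of the 4 GB core file
    $ pmap core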

You can limit which parts of the process image are saved in the core file, among other things, with the coreadm command. By default everything is saved (stack+heap+shm+ism+dism+text+data+rodata+anon+shanon+ctf).
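
As a hedged sketch, assuming the Solaris 10 coreadm content options, restricting future cores of the current shell (and its children) to stack and heap only might look like this:

    # Show the current global and default per-process core settings
    $ coreadm

    # Keep only stack and heap in cores dumped by this shell's process tree;
    # $$ is the shell's own PID, and children inherit the setting
    $ coreadm -P stack+heap $$

Dropping large shared or ISM/DISM segments this way can shrink a core dramatically, at the cost of losing that data for post-mortem analysis.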


From the Linux core(5) manpage (http://linux.die.net/man/5/core):

The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process's memory at the time of termination.

Perhaps your code runs in a multi-threaded environment and uses shared data.

Also, from the same manpage:

Since kernel 2.6.23, the Linux-specific /proc/PID/coredump_filter file can be used to control which memory segments are written to the core dump file in the event that a core dump is performed for the process with the corresponding process ID.

Reading and tuning this file may help you work out which parts of the application's memory end up in the dump; a minimal sketch follows.
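
Although the question is about Solaris, here is how that filter is used on Linux (PID is a placeholder; the bit meanings come from the same manpage):

    # Read the current filter as a hex bitmask; by default it is 0x33, i.e.
    # bit 0 (anonymous private), bit 1 (anonymous shared),
    # bit 4 (ELF headers) and bit 5 (private huge pages) are set
    $ cat /proc/PID/coredump_filter
    00000033

    # Additionally dump file-backed shared mappings (bit 3)
    $ echo 0x3b > /proc/PID/coredump_filter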

