How to analyze heap data from a .hprof file and use it to reduce memory leaks?

Recently, I have been running into a java.lang.OutOfMemoryError when starting the application.

During one such occurrence, I managed to capture a heap dump using jvisualvm.

I can open the resulting .hprof heap dump file in NetBeans 8.1, but I don't know how to analyze the data it contains. I would like to know how to read the dump and what corrective actions to take in the application to reduce these out-of-memory errors.
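(For reference: a dump like this can also be captured automatically with the -XX:+HeapDumpOnOutOfMemoryError JVM flag, or programmatically. Below is a minimal sketch using the HotSpotDiagnosticMXBean available on HotSpot JVMs; the class name and output path are just examples.)

    import com.sun.management.HotSpotDiagnosticMXBean;

    import java.io.IOException;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {

        // Writes a .hprof heap dump of the current JVM. When live is true,
        // only reachable objects are included, which is usually what you
        // want for leak analysis.
        public static void dumpHeap(String filePath, boolean live) throws IOException {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(filePath, live);
        }

        public static void main(String[] args) throws IOException {
            // Example path; dumps can be large, so pick a disk with enough space.
            dumpHeap("app-heap.hprof", true);
        }
    }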

4 answers

There are many ways to find the root cause of a memory leak. For example, you can use a profiler such as JProfiler and simply apply what is described in this great video. You can also take a look at the Eclipse Memory Analyzer, also known as MAT, which can analyze your heap dump and point out potential causes of the memory leak, as you can see in this video (more information about the leak suspects report here). Another way is to use Java Flight Recorder by applying this approach, or JVisualVM, using the approach described in this video.
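If you want to try the Flight Recorder route, recent JDKs also let you start and dump a recording from Java code instead of the command line. A minimal sketch, assuming JDK 11 or later; the recording length and file name are just placeholders:

    import jdk.jfr.Configuration;
    import jdk.jfr.Recording;

    import java.nio.file.Paths;

    public class FlightRecorderSketch {

        public static void main(String[] args) throws Exception {
            // The built-in "profile" configuration enables allocation and
            // old-object-sample events, which are the useful ones for leak hunting.
            Configuration config = Configuration.getConfiguration("profile");

            try (Recording recording = new Recording(config)) {
                recording.start();

                // ... exercise the workload you suspect of leaking ...
                Thread.sleep(60_000);

                // Write the recording to disk so it can be opened in JDK Mission Control.
                recording.dump(Paths.get("leak-hunt.jfr"));
            }
        }
    }

The same kind of recording can also be started on a running process with jcmd <pid> JFR.start and then analyzed in JDK Mission Control.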


The tool you need for this case is:

Eclipse Memory Analyzer (MAT)

Just download it, run it, and load the .hprof file into it. This may take a minute or two depending on the size of the dump, but you will then be presented with a good analysis of your memory usage. It is very easy to use, automatically flags potential memory leaks, and lets you analyze the data from several different angles.

I use MAT exclusively when dealing with non-trivial memory problems, and as far as I can remember it has helped me solve every one of them.
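To make this more concrete, here is a hypothetical example of the kind of pattern MAT's dominator tree and leak suspects report tend to point at: a static cache that only ever grows. The class and field names are invented for illustration:

    import java.util.HashMap;
    import java.util.Map;

    public class SessionCache {

        // Classic leak: a static map that only ever grows. In MAT this shows up
        // as one HashMap dominating a large share of the retained heap.
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static void put(String sessionId, byte[] payload) {
            CACHE.put(sessionId, payload);
            // Nothing ever removes entries, so every payload stays reachable
            // for the lifetime of the JVM.
        }
    }

The usual fix is to bound the cache (for example with an LRU eviction policy) or to remove entries as soon as they are no longer needed.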


In most cases, all you need to know is which classes are responsible for chewing up the most memory. You can run jmap -histo against the live process, which is convenient if it is a big JVM and you don't want to deal with huge heap dump files. It must be run as the same user that owns the process; for example, on Linux you can use:

 sudo -u <user> jmap -histo <pid> 

histo stands for histogram. The command prints the histogram to stdout, so you will probably want to redirect it to a file for analysis. The output is sorted by total footprint per class (number of instances * instance size), so look at the top 10 entries and you may already have your answer.


In general, what you are doing is asking "what is using most of the RAM?" Then, once you have identified that (and asked "could this be what is making me run out of RAM?"), you try to work out why there are so many of those objects around. Are they referenced by something that holds on to objects it no longer needs? Is something accidentally keeping references alive that it shouldn't? Are you caching too aggressively (for example, keeping "everything in one big array")? Does your database client buffer large ResultSets in RAM before returning them? And so on...
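On the last point, whether a ResultSet is streamed or fully buffered depends on the JDBC driver and the fetch size. A hedged sketch of processing rows incrementally instead of collecting them all into a list; the connection details and table name are made up, and the exact streaming behaviour varies by driver (PostgreSQL, for instance, needs auto-commit off plus a positive fetch size):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class StreamingQuerySketch {

        public static void main(String[] args) throws SQLException {
            // Connection details are placeholders.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/appdb", "app", "secret")) {

                // With auto-commit off and a fetch size set, many drivers stream
                // rows in batches instead of buffering the whole result in memory.
                conn.setAutoCommit(false);

                try (PreparedStatement stmt = conn.prepareStatement(
                        "SELECT id, payload FROM big_table")) {
                    stmt.setFetchSize(1_000);

                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            // Process each row and let it become garbage afterwards,
                            // instead of adding every row to a List.
                            process(rs.getLong("id"), rs.getString("payload"));
                        }
                    }
                }
            }
        }

        private static void process(long id, String payload) {
            // Placeholder for real work.
        }
    }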

