I'm using JProfiler to profile a piece of Java code that calls native C code through JNI, and I'm getting strange results in the CPU Views section. In particular, the Call Tree tab tells me that the Java method that calls into our native code consumes the largest share of execution time, yet the Hot Spots tab doesn't list that method at all. I see a similar story for the org.joda.time classes, which reportedly have a fairly large CPU share but are not reported as hot spots either; I wonder whether that's because they spend much of their time calling into native date-conversion code.
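To make the call pattern concrete, here is a minimal sketch of the kind of Java-to-native boundary involved (the class name, method name, and native library are all hypothetical, not from my actual code):

```java
// Hypothetical sketch of a JNI call path like the one being profiled.
// All names here are illustrative; the real native library is assumed.
public class NativeBridge {
    // Declared native: the implementation lives in a C library that
    // would normally be loaded with System.loadLibrary(...) (omitted).
    public static native long convertDate(long epochMillis);

    public static void main(String[] args) {
        try {
            // With no native library loaded, the JVM throws
            // UnsatisfiedLinkError at the call site.
            convertDate(0L);
        } catch (UnsatisfiedLinkError e) {
            System.out.println("native library not loaded: " + e);
        }
    }
}
```

The profiler sees the Java frame (`convertDate`'s caller) but has limited visibility into the native code beneath it, which is why time spent below the JNI boundary can be attributed in surprising ways.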
Any insight into this problem would be appreciated.
EDIT: I just came across a rather troubling academic paper, "Evaluating the Accuracy of Java Profilers" (I would post a link, but the University of Colorado server referenced by the Google result for this file seems to be down right now, so I had to pull up a copy via Google's Quick View link). I suspect the problem with our own methods is that they get inlined: there are many of them, they are short, and inlining likely eliminates the yield points at which a sampling profiler can take a sample. I'm not sure the same explanation applies to the time-conversion routines, though. Note that I get significantly different results from instrumented profiling and sampling profiling on the same test run, and the instrumented results match my intuition better. I recommend the paper to anyone who has ever found themselves scratching their head over profiling results. I'm still hoping someone has more information about this; "the profiler is wrong" is not a very satisfying conclusion.
EDIT 2: Looks like colorado.edu came back up; here's the link: http://www-plan.cs.colorado.edu/klipto/mytkowicz-pldi10.pdf
java jprofiler