How can I track down a JVM memory leak outside the heap in JBoss AS 5.1?

After switching to JBoss AS 5.1, running on JRE 1.6.0_17 on CentOS 5 Linux, the JVM process runs out of memory after about 8 hours (the roughly 3G maximum process size on a 32-bit system). This happens on both servers in the cluster under moderate load. Explicit Java heap usage stays flat, but the overall JVM footprint keeps growing. The thread count is very stable, peaking at about 370 threads with the thread stack size set to 128K.
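
For context, the growth shows up in the overall process size rather than in the Java heap. A simple way to watch it from the shell (the PID here is just the one from the crash log below, as an example):

  # total virtual size (VSZ), resident set (RSS) and thread count of the JVM process
  ps -o pid,vsz,rss,nlwp -p 8443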

The JVM footprint reaches 3G and then it dies with:

java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

  Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
  Error: ChunkPool::allocate

Current JVM memory arguments:

-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128

Given these settings, I would expect the total process footprint to stay around 1.5G. Instead, it just keeps growing until it hits 3G.
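
For what it's worth, here is the rough arithmetic behind that 1.5G expectation; the thread-stack figure comes from the settings above, and the native/overhead line is only a guess:

  Java heap        -Xmx1024m                        ~1024 MB
  PermGen          -XX:MaxPermSize=256m             ~ 256 MB
  Thread stacks    ~370 threads x 128 KB            ~  46 MB
  Code cache, GC structures, other native (guess)   ~ 150 MB
  -------------------------------------------------- --------
  Expected process footprint                        ~1.5 GB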

None of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me anything useful here, since all of this memory is on the native side of the JVM. pmap on the PID just gives me a ton of [anon] allocations, which don't really help much. As far as I can tell, this memory problem occurs even though I don't have any JNI or java.nio classes loaded.
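
For illustration, this is roughly how I have been looking at the mappings; the PID is the one from the crash log above, and the awk totalling is only a sketch (column 2 is the mapping size in KB on this version of procps, so adjust if the output layout differs):

  # list all mappings of the JVM process
  pmap -x 8443

  # rough running total of the anonymous mappings, in KB
  pmap -x 8443 | grep anon | awk '{ sum += $2 } END { print sum " KB anon" }'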

How can I troubleshoot, from inside or outside the JVM, where all of this non-heap memory is going?

Thanks! I'm quickly running out of ideas, and restarting the application servers every 8 hours is not going to be a very good solution.

Consider a 64-bit JVM, which would at least lift the 3G per-process limit you are hitting on 32-bit.

Walton: see the discussion at https://community.jboss.org/thread/152698. One thing to try is adding -Djboss.vfs.forceCopy=false to the java startup options; note that a WARN about this then shows up in the logs.
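
A minimal sketch of where that property could go, assuming a standard JBoss AS 5.1 layout where JAVA_OPTS is set in bin/run.conf (the existing options are the ones from the question):

  # $JBOSS_HOME/bin/run.conf -- append the VFS property to the existing JAVA_OPTS
  JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128 -Djboss.vfs.forceCopy=false"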

Try jvisualvm, which ships with the JDK, to get an idea of what's going on; jvisualvm can attach to a running process.
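
A small sketch of what that might look like in practice; the PID is the one from the crash log above, used purely as an example, and since the leak is on the native side these will mostly just confirm that the Java heap itself stays flat:

  # launch VisualVM from the JDK and attach to the local JBoss process
  $JAVA_HOME/bin/jvisualvm

  # from the command line: heap/permgen utilization every 5 seconds, and a one-off heap summary
  jstat -gcutil 8443 5000
  jmap -heap 8443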
