After switching to JBoss AS 5.1, running on JRE 1.6_17 on CentOS 5 Linux, the JVM process runs out of memory after about 8 hours (3G max process size on a 32-bit system). This occurs on both servers in the cluster at moderate load. Java heap usage itself stays within its limits, but the overall memory footprint of the JVM process keeps growing. The number of threads is very stable, peaking at about 370 threads with the thread stack size set to 128K.
The JVM footprint reaches 3G, then it dies with:
java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
Error: ChunkPool::allocate
Current JVM memory arguments:
-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128
Given these settings, I would expect the total process size to be roughly 1.5G. Instead, it just keeps growing until it hits 3G.
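My rough math (which may well be missing something) is: 1024 MB heap + 256 MB PermGen + about 46 MB of thread stacks (370 threads x 128 KB), plus some allowance for the code cache and general JVM overhead, which comes out to around 1.4-1.5 GB.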
None of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me anything useful, because all this memory is on the native side of the JVM. Running pmap on the PID just gives me a ton of [ anon ] allocations that don't help much either. This memory problem occurs even though, as far as I can tell, I am not using any JNI or java.nio classes.
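For reference, this is roughly how I've been inspecting the mappings (using the PID from the crash above; exact flags and output format may vary with the procps version):

pmap -x 8443 | sort -n -k3 | tail -20

That sorts the extended pmap output by the RSS column and shows the largest mappings, but nearly all of the big ones are just [ anon ] with no indication of what allocated them.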
How can I troubleshoot, from inside or outside the JVM, to find out where all of this non-heap memory is going?
Thanks! I'm quickly running out of ideas, and restarting the application servers every 8 hours is not going to be a very good solution.