Why does Direct ByteBuffer memory keep growing on the HornetQ server, leading to OOM?

Configuration

I installed a standalone HornetQ cluster (2.4.7-Final) on Ubuntu 12.04.3 LTS (GNU/Linux 3.8.0-29-generic x86_64). The instance has 16 GB of RAM and 2 cores, and I assigned -Xms5G -Xmx10G to the JVM.

The following are the address settings in the HornetQ configuration:

<address-settings>
   <address-setting match="jms.queue.pollingQueue">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <redelivery-delay>86400000</redelivery-delay>
      <max-delivery-attempts>10</max-delivery-attempts>
      <max-size-bytes>1048576000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
   </address-setting>
   <address-setting match="jms.queue.offerQueue">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <redelivery-delay>3600000</redelivery-delay>
      <max-delivery-attempts>25</max-delivery-attempts>
      <max-size-bytes>1048576000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
   </address-setting>
   <address-setting match="jms.queue.smsQueue">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <redelivery-delay>3600000</redelivery-delay>
      <max-delivery-attempts>25</max-delivery-attempts>
      <max-size-bytes>1048576000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
   </address-setting>
   <!-- default for catch all -->
   <!-- delay redelivery of messages for 1hr -->
   <address-setting match="#">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <redelivery-delay>3600000</redelivery-delay>
      <max-delivery-attempts>25</max-delivery-attempts>
      <max-size-bytes>1048576000</max-size-bytes>
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
   </address-setting>
</address-settings>

There are 10 more queues associated with the default address, matched by the wildcard setting.

Problem

Over time, Direct ByteBuffer memory gradually increases and even spills into swap space, eventually throwing an OutOfMemoryError ("Direct buffer memory").

I tried a lot of JVM and JMS settings, but to no avail. Even setting -XX:MaxDirectMemorySize=4G on the JVM only led to an earlier OOME for the same reason. It seems the ByteBuffers are never released, or the GC is not reclaiming the memory even though it is no longer referenced.
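For reference, this is how I watch the direct buffer pool from inside the JVM. It is a minimal sketch using the standard BufferPoolMXBean (the class name DirectBufferStats is my own, not part of HornetQ); the "direct" pool is the one whose count and capacity keep climbing:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectBufferStats {
    public static void main(String[] args) {
        // The platform exposes one BufferPoolMXBean per buffer pool,
        // typically "direct" and "mapped".
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(),
                    pool.getCount(),
                    pool.getMemoryUsed(),
                    pool.getTotalCapacity());
        }
    }
}

The same beans are visible remotely under java.nio:type=BufferPool in JConsole or VisualVM, which is how I track the growth on the server.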

Has anyone come across this issue before?

Any suggestions are welcome. Thanks in advance.

1 answer

I don't know anything about HornetQ internals, so this answer only covers direct byte buffers (DBBs) in general:

  • It is an ordinary leak: the DBB objects are still reachable and therefore never freed. This can be caused either by a bug or by improper use in the application.
    The usual approach here is to take a heap dump and determine what keeps the objects alive.

  • The buffers do become unreachable, but the garbage collector collects the old generation so rarely that it takes a long time until they are collected and their native memory is released. This is aggravated if the server runs with -XX:+DisableExplicitGC, which also suppresses the full GC the JVM attempts when the MaxDirectMemorySize limit is hit.
    Configuring the GC to run more frequently, so that unreachable DBBs are released in a timely manner, can solve this; see the sketch after this list.
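As a minimal illustration of the second point (this is not HornetQ code; the class name and buffer sizes are made up), the native memory behind an unreachable direct ByteBuffer is only returned once the GC actually collects the small heap object that wraps it, and an explicit System.gc() can trigger that unless -XX:+DisableExplicitGC is set:

import java.nio.ByteBuffer;

public class DirectBufferReclaimDemo {
    public static void main(String[] args) throws InterruptedException {
        // Allocate direct buffers and immediately drop the references.
        // The native memory is NOT freed at this point; it is only released
        // when the GC collects the tiny heap wrappers that own it.
        for (int i = 0; i < 50; i++) {
            ByteBuffer buf = ByteBuffer.allocateDirect(10 * 1024 * 1024); // 10 MB each
            buf.put(0, (byte) 1); // touch the buffer so the allocation is real
        }

        // With -XX:+DisableExplicitGC this call is a no-op, and the JVM also
        // loses the internal System.gc() it normally issues when a new direct
        // allocation would exceed -XX:MaxDirectMemorySize.
        System.gc();
        Thread.sleep(1000); // give reference processing time to free the native memory
    }
}

In other words, if HornetQ's buffers really are unreachable, either letting explicit GC work or tuning the collector to run old-generation cycles more often should keep the native memory from piling up.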

