ActiveMQ OOM after turning on STOMP

After enabling the STOMP protocol on the ActiveMQ server (previously only the default protocol was enabled), the broker started failing with OutOfMemoryError. I have only one client using STOMP. Sometimes the broker runs for a week without failures, sometimes it fails a day after a restart. Here is the configuration file:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                               http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

      <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
            lazy-init="false" scope="singleton"
            init-method="start" destroy-method="stop">
      </bean>

      <!-- The <broker> element is used to configure the ActiveMQ broker. -->
      <broker useJmx="true" xmlns="http://activemq.apache.org/schema/core"
              brokerName="cms-mq" dataDirectory="${activemq.data}">

        <destinationInterceptors>
          <virtualDestinationInterceptor>
            <virtualDestinations>
              <virtualTopic name="VirtualTopic.>" selectorAware="true"/>
            </virtualDestinations>
          </virtualDestinationInterceptor>
        </destinationInterceptors>

        <destinationPolicy>
          <policyMap>
            <policyEntries>
              <policyEntry topic=">" producerFlowControl="false">
              </policyEntry>
              <policyEntry queue=">" producerFlowControl="false">
              </policyEntry>
            </policyEntries>
          </policyMap>
        </destinationPolicy>

        <managementContext>
          <managementContext createConnector="false"/>
        </managementContext>

        <persistenceAdapter>
          <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>

        <systemUsage>
          <systemUsage>
            <memoryUsage>
              <memoryUsage percentOfJvmHeap="70"/>
            </memoryUsage>
            <storeUsage>
              <storeUsage limit="4 gb"/>
            </storeUsage>
            <tempUsage>
              <tempUsage limit="4 gb"/>
            </tempUsage>
          </systemUsage>
        </systemUsage>

        <transportConnectors>
          <transportConnector name="auto" uri="auto+nio://0.0.0.0:61616?maximumConnections=1000&amp;auto.protocols=default,stomp"/>
        </transportConnectors>

        <shutdownHooks>
          <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
        </shutdownHooks>

        <plugins>
          ... security plugins config ...
        </plugins>

      </broker>

      <import resource="jetty.xml"/>
    </beans>

start args:

    /usr/java/default/bin/java -Xms256M -Xmx1G \
      -Dorg.apache.activemq.UseDedicatedTaskRunner=false \
      -XX:HeapDumpPath=/var/logs/heapDumps -XX:+HeapDumpOnOutOfMemoryError \
      -Dcom.sun.management.jmxremote.port=8162 \
      -Dcom.sun.management.jmxremote.rmi.port=8162 \
      -Dcom.sun.management.jmxremote.password.file=/opt/apache-activemq-5.13.0//conf/jmx.password \
      -Dcom.sun.management.jmxremote.access.file=/opt/apache-activemq-5.13.0//conf/jmx.access \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Dcom.sun.management.jmxremote \
      -Djava.awt.headless=true \
      -Djava.io.tmpdir=/opt/apache-activemq-5.13.0//tmp \
      -Dactivemq.classpath=/opt/apache-activemq-5.13.0//conf:/opt/apache-activemq-5.13.0//../lib/ \
      -Dactivemq.home=/opt/activemq -Dactivemq.base=/opt/activemq \
      -Dactivemq.conf=/opt/apache-activemq-5.13.0//conf \
      -Dactivemq.data=/opt/apache-activemq-5.13.0//data \
      -jar /opt/activemq/bin/activemq.jar start

UPD: From Eclipse Memory Analyzer (MAT):

    Leak Suspects

    247,036 instances of "org.apache.activemq.command.ActiveMQBytesMessage", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 811,943,360 (76.92%) bytes.

    81 instances of "org.apache.activemq.broker.region.cursors.FilePendingMessageCursor", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 146,604,368 (13.89%) bytes.

UPD: Before the OOM error, there are several errors in the log:

    ERROR | Could not accept connection from null: java.lang.IllegalStateException: Timer already cancelled. | org.apache.activemq.broker.TransportConnector | ActiveMQ BrokerService[cms-mq] Task-13707
    INFO  | The connection to 'null' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[cms-mq] Task-13738

Any help debugging this would be appreciated. If necessary, I can provide additional information.

+7
java out-of-memory activemq stomp
2 answers

I reviewed the client code (a Ruby STOMP client), and it turned out that it set the "activemq.subscriptionName" subscription header without setting the "client-id" connection header. Setting the activemq.subscriptionName subscription header means you want your subscriber to be durable, but you must also set the "client-id" connection header, because otherwise it is auto-generated. When the client-id header is not set, the broker cannot match a reconnecting STOMP client to its existing durable subscription by client id. As a result, many offline durable subscribers accumulated on the topic, and messages piled up for each auto-generated client-id => OOM error.
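For illustration, a minimal sketch of how the two headers fit together, assuming the Ruby "stomp" gem; the host, credentials, destination and id values here are made up, not taken from the original client:

    require 'stomp'

    # The important part: 'client-id' on the CONNECT frame and
    # 'activemq.subscriptionName' on SUBSCRIBE are both set explicitly
    # and stay the same across reconnects, so the broker can reattach
    # the client to its existing durable subscription.
    client = Stomp::Client.new(
      hosts: [{ host: 'mq.example.com', port: 61616, login: 'user', passcode: 'secret' }],
      connect_headers: {
        'accept-version' => '1.1',
        'host'           => 'cms-mq',
        'client-id'      => 'reporting-service-1'   # fixed id, not auto-generated
      }
    )

    client.subscribe('/topic/VirtualTopic.Events',
                     'activemq.subscriptionName' => 'reporting-service-1-sub',
                     'ack' => 'auto') do |msg|
      # process the message
      puts msg.body
    end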

0

My guess is that you are flooding the broker with messages from a producer over STOMP and eventually blowing up the broker's memory. You have disabled producer flow control, which can lead to this even with a default JMS client, and with STOMP it is even easier to get into this situation, because by default there is no feedback to the producer to provide a flow-control mechanism; you must request a receipt for each send to get it, as sketched below.
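A rough sketch of receipt-based back-pressure with the Ruby "stomp" gem (connection details and destination are hypothetical): the gem requests a receipt when a block is passed to publish, and waiting for that receipt before the next send keeps the producer from running ahead of the broker.

    require 'stomp'

    client = Stomp::Client.new(
      hosts: [{ host: 'mq.example.com', port: 61616, login: 'user', passcode: 'secret' }]
    )

    payloads = ['msg-1', 'msg-2', 'msg-3']   # whatever the producer generates

    payloads.each do |payload|
      done = Queue.new
      # Passing a block makes the gem add a receipt header to the SEND frame
      # and invoke the block when the broker's RECEIPT frame arrives.
      client.publish('/queue/ingest', payload, 'persistent' => 'true') do |receipt|
        done << receipt
      end
      done.pop   # block until the broker has acknowledged this send
    end

    client.close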

To debug this, start by examining your broker logs, along with the destination and usage statistics, via the web console or another tool of your choice, to find out what state the broker is in.
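As one hypothetical way to pull per-destination statistics programmatically, the ActiveMQ web console ships with a Jolokia REST endpoint that exposes the broker MBeans; this sketch assumes default console settings (port 8161, admin/admin) and the 5.x MBean naming, so adjust host, credentials and brokerName to your installation:

    require 'net/http'
    require 'json'
    require 'uri'

    # Read Name, QueueSize, MemoryPercentUsage and ConsumerCount for every topic
    # on broker 'cms-mq' via Jolokia's read operation with an ObjectName pattern.
    uri = URI('http://localhost:8161/api/jolokia/read/' \
              'org.apache.activemq:type=Broker,brokerName=cms-mq,' \
              'destinationType=Topic,destinationName=*/' \
              'Name,QueueSize,MemoryPercentUsage,ConsumerCount')

    req = Net::HTTP::Get.new(uri)
    req.basic_auth('admin', 'admin')     # default console credentials
    req['Origin'] = 'http://localhost'   # some versions reject requests without an Origin header

    res  = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
    data = JSON.parse(res.body)

    (data['value'] || {}).each do |_mbean, attrs|
      printf("%-60s pending=%-8s mem%%=%-4s consumers=%s\n",
             attrs['Name'], attrs['QueueSize'],
             attrs['MemoryPercentUsage'], attrs['ConsumerCount'])
    end

Watching which destinations accumulate pending messages and memory over time should point at the subscriber or producer causing the buildup.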

+2
