Memory leak rabbitmq + celery?

I have been happily running celery + rabbitmq + django in production for a month or so. Yesterday I decided to upgrade celery from 2.1.4 to 2.2.4, and now RabbitMQ is out of control. After some time my nodes are no longer recognized by evcam, and beam.smp memory consumption starts to climb slowly, with 100% CPU usage.

I can run rabbitmqctl list_connections and see nothing unusual (just my single test node). rabbitmqctl list_queues -p <VHOST> shows no messages except the heartbeats from my test node. If I let the process run for several hours, it eventually brings the machine down.
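As a rough illustration of the kind of check above, here is a minimal, hypothetical helper that parses `rabbitmqctl list_queues name messages` output and flags any queue still holding messages; the sample output below is made up:

```python
def queues_with_messages(listing: str):
    """Parse `rabbitmqctl list_queues name messages` output and
    return (queue, count) pairs for queues that still hold messages."""
    backlog = []
    for line in listing.strip().splitlines():
        # Skip rabbitmqctl's banner lines ("Listing queues ...", "...done.")
        if line.startswith("Listing") or line.startswith("...") or not line.strip():
            continue
        name, count = line.rsplit(None, 1)
        if int(count) > 0:
            backlog.append((name, int(count)))
    return backlog

# Made-up sample: only the heartbeat queue has pending messages.
sample = """Listing queues ...
celeryd.pidbox\t0
celery\t0
heartbeat.node-test\t3
...done.
"""
print(queues_with_messages(sample))  # [('heartbeat.node-test', 3)]
```

In my case the only non-empty entries were the heartbeat messages from the test node, which is why the memory growth was so puzzling.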

I tried clearing the various queues with camqadm, to no avail, and rabbitmqctl stop_app just hangs. The only fix I have found is kill -9 on beam.smp (and all related processes) followed by a force_reset on my RabbitMQ server.
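For reference, the recovery sequence described above looks roughly like this. This is a sketch, not a recommendation: `force_reset` wipes the node's state (queues, exchanges, users, vhosts), so only run it on a broker you can afford to reset.

```shell
# Try a clean stop first; in my case this just hung.
rabbitmqctl stop_app

# When stop_app hangs, forcibly kill the Erlang VM and related processes.
pkill -9 -f beam.smp
pkill -9 -f epmd

# Restart the server, then wipe the node state and bring the app back up.
rabbitmq-server -detached
rabbitmqctl stop_app
rabbitmqctl force_reset
rabbitmqctl start_app
```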

I have no idea how to debug this. Nothing new seems to be happening (no new messages, etc.). Has anyone run into this before? Any ideas? What other information should I look at?

+6
django celery rabbitmq
2 answers

A celery developer told me three months ago that RabbitMQ versions after 2.1.1 were affected by a memory leak accompanied by CPU spikes. I am still on version 2.1.1 and do not have this problem.

http://www.rabbitmq.com/releases/rabbitmq-server/v2.1.1/

It is also true that celery 2.2.4 introduced some memory problems, but if you upgrade to celery 2.2.5, most of them are resolved.

http://docs.celeryproject.org/en/v2.2.5/changelog.html#fixes
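If you go that route, the upgrade is a simple pin to the fixed release (assuming a pip-managed environment), followed by a version check:

```shell
# Upgrade celery to the release with the memory fixes.
pip install --upgrade "celery==2.2.5"

# Confirm the installed version before restarting workers.
python -c "import celery; print(celery.__version__)"
```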

Hope this helps.

+4

This may not be relevant, but we recently discovered a memory leak in the Java virtual machine associated with the management extensions used to monitor garbage collection. Your heartbeat monitoring may be invoking those methods, resulting in a memory leak.

The problem is described here: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7066129

+1
