How can I track / fix a memory leak in tornado-redis using pympler?

I tried using tornado-redis (which is basically a fork of brukva, slightly modified to work with the tornado.gen interface instead of adisp) to deliver events using redis' pubsub.

So I wrote a little script to test things, inspired by this example.

    import os

    from tornado import ioloop, gen
    import tornadoredis

    print os.getpid()


    def on_message(msg):
        print msg


    @gen.engine
    def listen():
        c = tornadoredis.Client()
        c.connect()
        yield gen.Task(c.subscribe, 'channel')
        c.listen(on_message)


    listen()
    ioloop.IOLoop.instance().start()

Unfortunately, when I PUBLISHed through redis-cli (e.g. PUBLISH channel hello), memory usage kept growing.

To profile memory usage, I first tried using guppy-pe, but it would not work under Python 2.7 (yes, I even tried the trunk), so I went back to pympler.

    import os

    from pympler import tracker
    from tornado import ioloop, gen
    import tornadoredis

    print os.getpid()


    class MessageHandler(object):
        def __init__(self):
            self.memory_tracker = tracker.SummaryTracker()

        def on_message(self, msg):
            self.memory_tracker.print_diff()


    @gen.engine
    def listen():
        c = tornadoredis.Client()
        c.connect()
        yield gen.Task(c.subscribe, 'channel')
        c.listen(MessageHandler().on_message)


    listen()
    ioloop.IOLoop.instance().start()

Now every time I PUBLISHed, I could see that some objects were never released:

    types | # objects | total size
    ===== | ========= | ==========
    dict | 32 | 14.75 KB
    tuple | 41 | 3.66 KB
    set | 8 | 1.81 KB
    instancemethod | 16 | 1.25 KB
    cell | 22 | 1.20 KB
    function (handle_exception) | 8 | 960 B
    function (inner) | 7 | 840 B
    generator | 8 | 640 B
    <class 'tornado.gen.Task | 8 | 512 B
    <class 'tornado.gen.Runner | 8 | 512 B
    <class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
    list | 3 | 504 B
    str | 7 | 353 B
    int | 7 | 168 B
    builtin_function_or_method | 2 | 144 B

    types | # objects | total size
    ===== | ========= | ==========
    dict | 32 | 14.75 KB
    tuple | 42 | 4.23 KB
    set | 8 | 1.81 KB
    cell | 24 | 1.31 KB
    instancemethod | 16 | 1.25 KB
    function (handle_exception) | 8 | 960 B
    function (inner) | 8 | 960 B
    generator | 8 | 640 B
    <class 'tornado.gen.Task | 8 | 512 B
    <class 'tornado.gen.Runner | 8 | 512 B
    <class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
    object | 8 | 128 B
    str | 2 | 116 B
    int | 1 | 24 B

    types | # objects | total size
    ===== | ========= | ==========
    dict | 32 | 14.75 KB
    tuple | 42 | 4.73 KB
    set | 8 | 1.81 KB
    cell | 24 | 1.31 KB
    instancemethod | 16 | 1.25 KB
    function (handle_exception) | 8 | 960 B
    function (inner) | 8 | 960 B
    generator | 8 | 640 B
    <class 'tornado.gen.Task | 8 | 512 B
    <class 'tornado.gen.Runner | 8 | 512 B
    <class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
    list | 0 | 240 B
    object | 8 | 128 B
    int | -1 | -24 B
    str | 0 | -34 B

Now that I know a memory leak is actually occurring, how can I track down where these objects are created? Do you think I should start here?
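(For reference, here is a minimal sketch of one way to inspect who is still holding on to those objects, using only the standard gc module. It is only an illustration, not part of the original script: the class path tornado.stack_context.ExceptionStackContext comes from the pympler diff above, while the helper name and the idea of calling it from on_message after a few messages are assumptions.)

    import gc

    from tornado.stack_context import ExceptionStackContext


    def report_stack_context_referrers():
        # Collect every ExceptionStackContext instance the garbage collector
        # still tracks -- these are the objects piling up in the diffs above.
        leaked = [obj for obj in gc.get_objects()
                  if isinstance(obj, ExceptionStackContext)]
        print '%d ExceptionStackContext instances alive' % len(leaked)

        for obj in leaked:
            # gc.get_referrers() lists the containers keeping each instance
            # alive; their types hint at where the reference chain starts
            # (a frame, a cell, a generator, ...).
            referrers = [r for r in gc.get_referrers(obj) if r is not leaked]
            print type(obj), '<-', [type(r) for r in referrers]

(pympler also ships a refbrowser module that can render a similar referrer tree; the gc-based version above just avoids relying on anything beyond the standard library.)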

1 answer

Updating Tornado to version 2.3 should solve this problem.

I had the same problem: ExceptionStackContext objects were leaking very quickly. It was caused by this bug report: https://github.com/facebook/tornado/issues/507 and fixed in this commit: https://github.com/facebook/tornado/commit/57a3f83fc6b6fa4d9c207dc078a337260863ff99. Upgrading to 2.3 took care of the problem for me.

