I tried using tornado-redis (which is basically a fork of brukva, slightly modified to work with the tornado.gen interface instead of adisp) to deliver events using Redis' pub/sub.
So, I wrote a little script to test things, inspired by this example.
```python
import os

from tornado import ioloop, gen
import tornadoredis

print os.getpid()

def on_message(msg):
    print msg

@gen.engine
def listen():
    c = tornadoredis.Client()
    c.connect()
    yield gen.Task(c.subscribe, 'channel')
    c.listen(on_message)

listen()
ioloop.IOLoop.instance().start()
```
Unfortunately, every time I PUBLISHed through redis-cli, memory usage continued to grow.
To profile memory usage, I first tried guppy-pe, but it would not work under Python 2.7 (yes, I even tried the trunk), so I fell back to Pympler.
```python
import os

from pympler import tracker
from tornado import ioloop, gen
import tornadoredis

print os.getpid()

class MessageHandler(object):
    def __init__(self):
        self.memory_tracker = tracker.SummaryTracker()

    def on_message(self, msg):
        self.memory_tracker.print_diff()

@gen.engine
def listen():
    c = tornadoredis.Client()
    c.connect()
    yield gen.Task(c.subscribe, 'channel')
    c.listen(MessageHandler().on_message)

listen()
ioloop.IOLoop.instance().start()
```
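For context, `SummaryTracker.print_diff` works by snapshotting all live objects grouped by type and printing the delta since the previous snapshot. Here is a stdlib-only sketch of that idea (no Pympler required; the function names are my own, and this only counts objects rather than sizing them):

```python
import gc
from collections import Counter

def snapshot():
    # Tally all gc-tracked live objects by type name.
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

def print_diff(before, after):
    # Show only the types whose live count grew between snapshots,
    # which is roughly what SummaryTracker.print_diff reports.
    for name, growth in (after - before).most_common():
        print(name, growth)

before = snapshot()
leaked = [[] for _ in range(100)]  # simulate a leak of 100 lists
after = snapshot()
print_diff(before, after)
```

If the same types keep showing up with a positive delta after every message, as in the output below, those objects are being retained somewhere.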
Now, every time I PUBLISHed, I could see that some objects were never released:
```
                                                types |   # objects |   total size
===================================================== | =========== | ============
                                                 dict |          32 |     14.75 KB
                                                tuple |          41 |      3.66 KB
                                                  set |           8 |      1.81 KB
                                       instancemethod |          16 |      1.25 KB
                                                 cell |          22 |      1.20 KB
                          function (handle_exception) |           8 |        960 B
                                     function (inner) |           7 |        840 B
                                            generator |           8 |        640 B
                             <class 'tornado.gen.Task |           8 |        512 B
                           <class 'tornado.gen.Runner |           8 |        512 B
  <class 'tornado.stack_context.ExceptionStackContext |           8 |        512 B
                                                 list |           3 |        504 B
                                                  str |           7 |        353 B
                                                  int |           7 |        168 B
                           builtin_function_or_method |           2 |        144 B
```
Now that I know a memory leak is actually occurring, how can I track down where these objects are created? Do you think I should start here?
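One stdlib way to chase this down (assuming the leak is reproducible, as above) is to ask the garbage collector which objects still hold references to the leaked instances and walk those chains backwards; objgraph's `show_backrefs` does the same thing graphically. A sketch, with helper names of my own invention:

```python
import gc

def live_objects_of(type_name):
    # All gc-tracked live objects whose type name matches,
    # e.g. 'Runner' or 'ExceptionStackContext'.
    gc.collect()
    return [o for o in gc.get_objects() if type(o).__name__ == type_name]

def who_holds(obj):
    # Type names of the objects keeping `obj` alive; following these
    # referrers usually leads to the container (a callback list, a
    # context object, ...) that was never cleaned up.
    return [type(r).__name__ for r in gc.get_referrers(obj)]
```

Calling `live_objects_of('Runner')` after each PUBLISH and printing `who_holds(...)` for one of the survivors should point at whichever structure is retaining the Task/Runner pairs.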
Simon Charette