Using __slots__ under PyPy

I have this simple code that I used to measure how classes with __slots__ perform (taken from here):

    import timeit

    def test_slots():
        class Obj(object):
            __slots__ = ('i', 'l')

            def __init__(self, i):
                self.i = i
                self.l = []
        for i in xrange(1000):
            Obj(i)

    print timeit.Timer('test_slots()', 'from __main__ import test_slots').timeit(10000)

If I run it through python2.7, I get about 6 seconds - fine, it really is faster (and also more memory-efficient) than the version without slots.

But if I run the code under PyPy (2.2.1, 64-bit, for Mac OS X), it starts using 100% of the CPU and never returns (I waited for minutes - no result).

What's happening? Should I use __slots__ under PyPy?

Here's what happens if I pass different numbers to timeit():

    timeit(10)    - 0.067s
    timeit(100)   - 0.5s
    timeit(1000)  - 19.5s
    timeit(10000) - ? (probably more than a Game of Thrones episode)

Thanks in advance.


Note that the same behavior occurs if I use namedtuples:

    import collections
    import timeit

    def test_namedtuples():
        Obj = collections.namedtuple('Obj', 'i l')
        for i in xrange(1000):
            Obj(i, [])

    print timeit.Timer('test_namedtuples()', 'from __main__ import test_namedtuples').timeit(10000)
2 answers

In each of the 10,000 iterations of the timeit code, the class is recreated from scratch. Creating classes is probably not a well-optimized operation in PyPy; worse, doing so will probably discard all the optimizations the JIT learned about the previous incarnation of the class. PyPy tends to be slow until the JIT warms up, so code that requires repeated warm-ups will kill your performance.

The solution here, of course, is simply to move the class definition outside of the code being tested.
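For concreteness, here is a minimal sketch of that rearrangement (the same logic as the snippet in the question, with only the class hoisted to module level):

    import timeit

    class Obj(object):
        # Defined once at module level, so the JIT sees a single class
        # whose optimizations stay valid across all timing iterations.
        __slots__ = ('i', 'l')

        def __init__(self, i):
            self.i = i
            self.l = []

    def test_slots():
        for i in xrange(1000):
            Obj(i)

    print timeit.Timer('test_slots()', 'from __main__ import test_slots').timeit(10000)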


To directly answer the question in the title: __slots__ is pointless for (but does not hurt) performance in PyPy.
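
One rough way to check this yourself is to time two otherwise identical classes, with and without __slots__, both defined at module level so the warm-up issue above does not interfere. This is only a sketch (the class names, loop sizes, and expected outcome are illustrative, not measurements from the original post):

    import timeit

    class WithSlots(object):
        __slots__ = ('i', 'l')
        def __init__(self, i):
            self.i = i
            self.l = []

    class WithoutSlots(object):
        def __init__(self, i):
            self.i = i
            self.l = []

    def make(cls):
        # Build 1000 small objects of whichever class is passed in.
        for i in xrange(1000):
            cls(i)

    # Under PyPy the two timings should come out roughly equal, since the
    # JIT already stores instance attributes compactly either way.
    print timeit.Timer('make(WithSlots)', 'from __main__ import make, WithSlots').timeit(10000)
    print timeit.Timer('make(WithoutSlots)', 'from __main__ import make, WithoutSlots').timeit(10000)

Under CPython 2.7, by contrast, the __slots__ version is usually a bit faster and noticeably smaller in memory, which matches what the question observed.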

