I have this simple code that helped me measure how class instantiation performs with __slots__ (taken from here):
    import timeit

    def test_slots():
        class Obj(object):
            __slots__ = ('i', 'l')

            def __init__(self, i):
                self.i = i
                self.l = []

        for i in xrange(1000):
            Obj(i)

    print timeit.Timer('test_slots()', 'from __main__ import test_slots').timeit(10000)
If I run it through Python 2.7, it takes about 6 seconds - fine, it's indeed faster (and also more memory-efficient) than without __slots__.
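For comparison, here is a minimal sketch of the variant without __slots__ that I measured against (same class shape, just no __slots__ declaration):

    import timeit

    def test_no_slots():
        class Obj(object):  # same class as above, but without __slots__
            def __init__(self, i):
                self.i = i
                self.l = []

        for i in xrange(1000):
            Obj(i)

    print timeit.Timer('test_no_slots()', 'from __main__ import test_no_slots').timeit(10000)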
But if I run the code under PyPy (2.2.1, 64-bit for Mac OS X), it starts using 100% of the CPU and never returns (I waited for minutes with no result).
What's happening? Should I use __slots__ under PyPy?
Here's what happens if I pass different numbers to timeit():
    timeit(10)    - 0.067s
    timeit(100)   - 0.5s
    timeit(1000)  - 19.5s
    timeit(10000) - ? (probably longer than a Game of Thrones episode)
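(In case it helps to reproduce these numbers, here is a minimal sketch of how they can be collected; the loop over repetition counts is just my illustration:)

    import timeit

    timer = timeit.Timer('test_slots()', 'from __main__ import test_slots')
    # Mirror the repetition counts listed above; under PyPy the last one
    # never finished for me.
    for n in (10, 100, 1000, 10000):
        print n, timer.timeit(n)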
Thanks in advance.
Note that the same behavior occurs if I use namedtuples:
    import collections
    import timeit

    def test_namedtuples():
        Obj = collections.namedtuple('Obj', 'i l')
        for i in xrange(1000):
            Obj(i, [])

    print timeit.Timer('test_namedtuples()', 'from __main__ import test_namedtuples').timeit(10000)