Does PyPy handle streams and sockets quickly compared to hand-written C?
No. It is usually the same or worse.
PyPy retains the global interpreter lock (GIL) that CPython has, which means native threads cannot run Python code in parallel. Python threads also carry extra semantics that are expensive: there is additional synchronization around starting, shutting down, and tracking threads as Python objects. By comparison, C threads are cheaper to create, run faster, and can execute fully in parallel.
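To illustrate the GIL point, here is a small, self-contained sketch (the function name and loop size are arbitrary) showing that CPU-bound work in two Python threads takes roughly as long as running it twice sequentially, on CPython and PyPy alike:

    # Illustrative only: the GIL lets one thread run Python bytecode at a time,
    # so splitting CPU-bound work across threads does not make it parallel.
    import threading
    import time

    def spin(n=10_000_000):
        while n:          # pure-Python busy loop; holds the GIL throughout
            n -= 1

    start = time.time()
    spin()
    spin()
    print("sequential :", time.time() - start)

    start = time.time()
    t1 = threading.Thread(target=spin)
    t2 = threading.Thread(target=spin)
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("two threads:", time.time() - start)   # roughly the same, not half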
Efficient socket handling means minimizing the time spent doing anything other than waiting for the next socket event. Because PyPy's threading model is still tied to the GIL, a thread returning from a blocking socket call cannot do anything until it reacquires the GIL. Equivalent C code is usually faster and gets back to waiting for socket events sooner.
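To make that concrete, here is a minimal thread-per-connection echo server sketch (the host and port are placeholders). The blocking recv() releases the GIL while it waits, but the handler must reacquire the GIL before it can touch the received data:

    import socket
    import threading

    def handle(conn):
        with conn:
            while True:
                data = conn.recv(4096)    # GIL is released while blocked here
                if not data:
                    break
                conn.sendall(data)        # back in Python code: needs the GIL again

    def serve(host="127.0.0.1", port=9000):   # placeholder address for the example
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        serve()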
What about compared to regular Python?
Yes, though not by as much.
For the reasons above, PyPy generally needs less processor time than CPython for equivalent code, apart from occasional spikes caused by JIT warm-up and other overhead. As a result, thread and socket handling is faster and more responsive.
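A rough way to see this for yourself (the script name and workload here are made up for the example) is to run the same timing loop under both interpreters, e.g. python bench.py versus pypy bench.py. On PyPy the first repetitions are typically slower while the JIT warms up, then settle well below CPython's times:

    import time

    def work():
        total = 0
        for i in range(1_000_000):
            total += i * i
        return total

    for rep in range(10):
        start = time.time()
        work()
        print(f"rep {rep}: {time.time() - start:.4f}s")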
I would just try it, but the Python code in question runs on a small cluster of machines that I don't administer. I'm asking here because my Google searches mostly turned up comparisons with Cython, Unladen Swallow, etc., and I don't want to bother the administrator about it if this is unlikely to work.
PyPy will only improve performance if your code is CPU-bound. As far as I know, PyPy is the fastest Python implementation available out of the box. You can look into some of the other implementations, or consider writing C extensions, if truly parallel stream handling is your priority.
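If going to C is on the table, one lightweight route (shown here with ctypes and the system C library purely as an illustration) is to push the hot path into a compiled function; ctypes releases the GIL for the duration of the foreign call, so other Python threads can run while it executes:

    import ctypes
    import ctypes.util

    # Load the platform C library (works on most Unix-like systems).
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Declare the signature of a C routine; ctypes drops the GIL around the call.
    libc.strlen.restype = ctypes.c_size_t
    libc.strlen.argtypes = [ctypes.c_char_p]

    print(libc.strlen(b"hello, sockets"))   # -> 14

On PyPy specifically, cffi is generally the preferred way to bind to C code.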
I don't really need PyPy to be as good as C; I want to use it because right now the interpreter overhead completely overshadows the computation I'm trying to do. I just need PyPy to get me into the neighborhood of hand-written C.
Closing the performance gap with C is currently PyPy's biggest selling point. I highly recommend you give it a try.
Matt Joiner