GPGPU Programming in Python

I want to do GPGPU programming in Python. Should I start with PyOpenCL or CLyther? What's the difference?

+4

4 answers

OpenCL consists of two parts. There is the host side, which is usually written in C, and the device side, which is written in a C-derived language defined by OpenCL. The device code is compiled for the device (usually a GPU) at runtime.

CLyther tries to abstract all of this away. You write the host-side code in Python, and the device-side code in a subset of Python (similar to Cython). It is very high level and easy to use.

PyOpenCL is a relatively low-level Python binding to the OpenCL API. The device-side code is written in OpenCL C (a C99 dialect). This gives you full access to, and full control of, OpenCL. Very little is abstracted away.

I have limited experience with both, but my impression is that once both are mature, I would prefer CLyther for most projects. It is more user friendly, which means you are more likely to use it, and to use it more. It is also easier to move code between CLyther and plain Python than between PyOpenCL and plain Python, so code maintenance and refactoring should be simpler. For performance-critical projects I would prefer PyOpenCL: it gives you lower-level control and fewer layers between you and the hardware. The best achievable performance should be better with PyOpenCL than with CLyther.

I do not know whether this will hold forever. It is likely that PyOpenCL will eventually add higher-level constructs and that CLyther will eventually add lower-level control. In an ideal world the CLyther developers would rebuild their kernel machinery on top of PyOpenCL, so we would not have to choose and duplicated effort could be avoided. I doubt that will ever happen.

PyOpenCL currently looks more mature than CLyther. It was started earlier and is less ambitious in scope. It has better documentation than CLyther, and appears to have a larger user community. Both are similar in code size: CLyther is about 4 KLOC of Python and 4 KLOC of C; PyOpenCL is about 7 KLOC of Python and 9 KLOC of C++. These are rough figures (including build systems, examples, etc.), so they should not be taken to imply anything beyond approximate parity.

+11

It seems to me that PyOpenCL is closer to the C bindings for OpenCL than CLyther is.

This means that if you already know OpenCL, or plan to port code from other languages to Python, then PyOpenCL may be for you. CLyther, on the other hand, seems more "pythonic" than PyOpenCL, so if you are more familiar with Python, its idioms may be easier to understand.

Both of them are in beta, so you may not find every feature you need, and there may be bugs in both.

Good luck

+2

CLyther also contains low-level bindings close to the OpenCL C API, similar to PyOpenCL's.

CLyther is "pythonic" in that it also allows you to pass and/or use Python functions as OpenCL device/kernel functions.

Inline in your Python code, you can write:

    @kernel
    @bind('global_work_size', 'a.size')
    @bind('local_work_size', 1)
    def sum(a, b, ret):
        i = clrt.get_global_id(0)
        ret[i] = a[i] + b[i]

    sum(clarray1, clarray2, clarray3)
0

I find PyOpenCL similar to PyCUDA: both let you do extensive optimization on the kernel side, which is the interesting part of GPGPU programming.

The kernel is written in C (this particular example is PyCUDA), while the host code stays pythonic:

    mod = SourceModule("""
    __global__ void multiply_them(float *dest, float *a, float *b)
    {
        const int i = threadIdx.x;
        dest[i] = a[i] * b[i];
    }
    """)
0