I wanted to know: is it possible to execute some threads on the GPU and the remaining threads on the CPU?
Yes
In other words, if I start 100 threads and I have an 8-core CPU, is it possible that 8 of the 100 threads run on the CPU while the remaining 92 run on the GPU?
No. That description assumes you can treat the GPU and the CPU as a single computing resource. You cannot.
That does not mean they cannot work on the same task, though.
- The GPU and the CPU are separate OpenCL devices.
- You can write code that can talk to multiple devices.
- You can compile the same kernel for multiple devices.
- You can ask multiple devices to do work at the same time (a host-side sketch follows this list).
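For illustration, here is a minimal host-side sketch in C of that setup. It assumes a platform that exposes both a GPU and a CPU device (some runtimes expose only one or the other, in which case you need two platforms and two contexts); `my_kernel` and the `src` string are placeholders, and error checking is omitted.

```c
#include <CL/cl.h>

/* Sketch: build the same kernel source for one GPU device and one CPU
   device from the same platform, with one command queue per device. */
void setup_two_devices(const char *src)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id dev[2];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev[0], NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev[1], NULL);

    /* One context covering both devices. */
    cl_context ctx = clCreateContext(NULL, 2, dev, NULL, NULL, NULL);

    /* One program object, built for both devices at once. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 2, dev, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "my_kernel", NULL);

    /* A separate command queue per device; nothing is shared automatically. */
    cl_command_queue gpu_q = clCreateCommandQueue(ctx, dev[0], 0, NULL);
    cl_command_queue cpu_q = clCreateCommandQueue(ctx, dev[1], 0, NULL);

    (void)kernel; (void)gpu_q; (void)cpu_q;   /* used in the sketches below */
}
```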
... but ...
- None of this is automatic.
- OpenCL will not split a single NDRange (or equivalent) call across multiple devices.
- That means you have to partition the work between the two devices yourself (see the sketch after this list).
- There will usually be a large speed difference between the devices, so getting a good split takes more than "92 here, 8 there."
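As a rough sketch of that manual partitioning, reusing `gpu_q`, `cpu_q`, and `kernel` from the setup above (global work offsets require OpenCL 1.1 or later; the 92/8 ratio is just the number from the question, not a recommendation):

```c
/* Manually split one logical 1D range of `total` work-items across the
   two devices. In practice the ratio should come from benchmarking. */
size_t total      = 1 << 20;
size_t gpu_count  = total * 92 / 100;   /* placeholder split */
size_t cpu_count  = total - gpu_count;
size_t gpu_offset = 0;
size_t cpu_offset = gpu_count;

/* Two separate enqueues: OpenCL will not divide one NDRange for you. */
clEnqueueNDRangeKernel(gpu_q, kernel, 1, &gpu_offset, &gpu_count, NULL, 0, NULL, NULL);
clEnqueueNDRangeKernel(cpu_q, kernel, 1, &cpu_offset, &cpu_count, NULL, 0, NULL, NULL);

/* Wait for both devices before reading results back. */
clFinish(gpu_q);
clFinish(cpu_q);
```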
What I have found works better is to give the CPU a different task while the GPU is running: preparing the next batch of work for the GPU, or post-processing results that have already come back from it. Sometimes that is plain host code; sometimes it is another OpenCL kernel. A sketch of that pattern follows.
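Again reusing `gpu_q` and `kernel` from above; `prepare_next_batch()` and `postprocess_results()` are hypothetical stand-ins for whatever host-side work your application has:

```c
/* Let the GPU run one batch while the CPU does something useful. */
size_t total = 1 << 20;
clEnqueueNDRangeKernel(gpu_q, kernel, 1, NULL, &total, NULL, 0, NULL, NULL);
clFlush(gpu_q);              /* push the work so the GPU actually starts */

prepare_next_batch();        /* CPU work overlapping GPU execution */

clFinish(gpu_q);             /* block until the GPU batch is done */
postprocess_results();       /* then consume the GPU's output */
```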