How ugly is the API for GP-GPU programming?

I am trying to decide whether to study GP-GPU technologies such as CUDA now or to postpone it. My problem area (bioinformatics) is one where it would be good to know, since many of our problems have massive parallelism, although most people in the field are of course unaware of this. My question is: how difficult are the APIs for CUDA and the other GP-GPU technologies to use in practice? Is it extremely painful, or is most of the complexity encapsulated? Does it feel like "normal" programming, or is the abstraction over running your code on a graphics card anywhere from leaky to nonexistent?

+3
4 answers

With CUDA you write in C, but you must know exactly what you are doing to get maximum performance. The concepts are not, and should not be, abstracted away, because the GPU really does work differently; the same is true of CPU SIMD instruction sets such as SSE. At a higher level you need to decide what you want to do in parallel and map it onto the hardware as well as you can. You have to restructure the problem to exploit the parallelism effectively, understand how the GPU executes work (SIMD style), and try to minimize branch divergence.

So although the syntax is C, the complexity is not abstracted away.
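
To give a feel for this, here is a minimal, hypothetical CUDA kernel (the usual SAXPY example, not taken from this answer): the syntax is plain C plus a few extensions, but you still think in terms of thread indices and lock-step (SIMD-style) execution within a warp, where divergent branches hurt.

// Minimal CUDA kernel sketch (hypothetical example):
// the syntax is plain C with a few extensions such as __global__ and the
// built-in thread indices. Threads in a warp run in lock-step (SIMD style),
// so data-dependent branches within a warp serialize both paths.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against running past the array end
        y[i] = a * x[i] + y[i];
}

// Host-side launch, choosing the grid/block geometry yourself:
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);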

However, this is much better than writing shaders in HLSL!

+2

For GPGPU, look at OpenCL.

There are several APIs for GPGPU. CUDA runs only on NVidia GPUs. AMD's Stream SDK is similar to CUDA but not compatible with it, and it targets AMD (ATI) GPUs instead.

Microsoft's DirectX 11 compute shaders also run general-purpose code on the GPU, but only on Windows Vista and Windows 7, not on Linux or Windows XP. There are samples in the DirectX SDK.

OpenCL is a vendor-neutral standard that runs on GPUs from multiple vendors, and it is conceptually quite similar to CUDA.
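
To show what "conceptually similar to CUDA" means for day-to-day code, here is a minimal sketch of a CUDA runtime program (the kernel name, sizes and values are made up for illustration): you explicitly allocate device memory, copy data across, and choose the launch geometry yourself.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);           // host buffer
    for (int i = 0; i < n; ++i)
        h[i] = 1.0f;

    // Explicit device allocation and copies: you manage two address spaces.
    float *d = NULL;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // The programmer picks the launch geometry (blocks x threads).
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d, 2.0f, n);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);                 // prints 2.000000

    cudaFree(d);
    free(h);
    return 0;
}

OpenCL follows the same allocate/copy/launch/copy-back pattern, but adds more setup boilerplate (platform, context, command queue, run-time compilation of kernel source), which is part of why it tends to feel more verbose than CUDA.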


+7

It is also worth looking at other parallel programming models and platforms, such as the Cell processor and Map/Reduce.
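
As a rough illustration of what a Map/Reduce-style pattern looks like on a GPU (a minimal hypothetical sketch in CUDA, assuming blockDim.x is a power of two; not from the original answer):

// Map step: each thread transforms one element (here, squares it).
// Reduce step: a shared-memory tree reduction produces one partial sum
// per block; a second pass (or the host) combines the block sums.
__global__ void mapReduceSum(const float *in, float *blockSums, int n)
{
    extern __shared__ float sdata[];
    int tid = threadIdx.x;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    sdata[tid] = (idx < n) ? in[idx] * in[idx] : 0.0f;    // map
    __syncthreads();

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            sdata[tid] += sdata[tid + stride];            // reduce
        __syncthreads();
    }

    if (tid == 0)
        blockSums[blockIdx.x] = sdata[0];
}

// Launch with dynamic shared memory sized to the block:
//   mapReduceSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_sums, n);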

+2

In my experience, the hard part is not the API itself but restructuring your problem: whether the target is a GPU with CUDA or a cluster with MPI, you have to rethink the algorithm so that its parallelism is exposed, and that takes real effort. Note also that CUDA runs only on NVIDIA GPUs. For bioinformatics in particular, there is already published work on a CUDA-based Smith-Waterman implementation.
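
As an illustration of the kind of restructuring involved (a generic sketch, not the implementation from the work mentioned above): Smith-Waterman maps onto a GPU if you fill the dynamic-programming matrix one anti-diagonal at a time, because all cells on the same anti-diagonal are independent of one another. A minimal hypothetical kernel with a linear gap penalty:

// Score matrix H is (m+1) x (n+1), row-major; row 0 and column 0 are zeros.
// The host launches this kernel once per anti-diagonal d = 2 .. m+n, so each
// launch acts as a synchronization point between diagonals.
__global__ void sw_antidiagonal(const char *a, const char *b, int m, int n,
                                int d, int *H,
                                int match, int mismatch, int gap /* negative */)
{
    int iMin = max(1, d - n);
    int iMax = min(m, d - 1);
    int i = iMin + blockIdx.x * blockDim.x + threadIdx.x;   // 1-based row
    if (i > iMax)
        return;
    int j = d - i;                                          // 1-based column

    int s    = (a[i - 1] == b[j - 1]) ? match : mismatch;
    int diag = H[(i - 1) * (n + 1) + (j - 1)] + s;
    int up   = H[(i - 1) * (n + 1) + j] + gap;
    int left = H[i * (n + 1) + (j - 1)] + gap;

    H[i * (n + 1) + j] = max(0, max(diag, max(up, left)));
}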

0
