Is GPGPU ready for production use and prototyping, or would you still consider it mostly a research/bleeding-edge technology? I work in computational biology, and the field is starting to attract attention from more computationally oriented people, but most of the work so far seems to be porting well-known algorithms. Porting an algorithm is a research project in itself, and the vast majority of people in the field know little about it.
I do some fairly computationally intensive projects on ordinary multi-core CPUs. I'm wondering how close GPGPU is to being usable enough for prototyping new algorithms and for everyday production use. From reading Wikipedia, I get the impression that the programming model is strange (heavily SIMD) and somewhat limited (no recursion or virtual functions, although these restrictions are slowly being lifted, and the languages are no higher-level than C or a restricted subset of C++), and that there are several competing, mutually incompatible standards. I also get the impression that, unlike regular multi-core, fine-grained parallelism is the only game in town: basic library functions have to be rewritten, and you can't get big speedups just by parallelizing the outer loop of your program and calling old library functions inside it the way you can on a multi-core CPU.
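To make the contrast concrete, here is a minimal sketch of what I understand the difference to be. The workload (`scale_kernel`, scaling an array by a constant) is hypothetical and just stands in for "one element of work per thread"; on a multi-core CPU the coarse-grained equivalent would be an OpenMP `parallel for` around the outer loop that calls an existing library routine per element:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Coarse-grained multi-core style (for comparison): parallelize the outer
// loop and keep calling existing scalar library code inside it, e.g.
//
//   #pragma omp parallel for
//   for (int i = 0; i < n; ++i)
//       out[i] = some_library_function(in[i]);
//
// Fine-grained GPGPU style: the loop body becomes a kernel that each thread
// runs on a single element, and that body must be rewritten in the
// restricted device subset of C/C++ rather than calling host-side libraries.
__global__ void scale_kernel(const float *in, float *out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        out[i] = in[i] * factor;                    // hypothetical per-element work
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));     // unified memory, for brevity
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;      // enough blocks to cover n
    scale_kernel<<<blocks, threads>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();

    printf("out[42] = %f\n", out[42]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

My impression is that the hard part is not the launch boilerplate above but that any non-trivial library routine called in the loop body has to be re-expressed in this per-element, SIMD-friendly form; please correct me if that's wrong.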
How serious are these limitations in practice? Is GPGPU ready for serious use now? If not, how long would you guess it will take?
Edit: One of the main things I'm trying to wrap my head around is how different the programming model is from a regular multi-core CPU with lots of really slow cores.
Edit #2: I guess I would summarize the answers I've gotten as: GPGPU is practical enough for early adopters in niches it fits very well, but still bleeding-edge enough that it can't be considered a "standard" tool like multi-core or distributed parallelism, even in those niches where performance matters.