Why do parallel programming books always ignore data parallelism?

Over the past few years there has been a significant shift towards parallel programming through systems such as OpenCL and CUDA, and yet books published even within the past six months never mention data-parallel programming.

Data parallelism is not suitable for every problem, but it seems like a significant gap that goes unaddressed.

+4
4 answers

First of all, I will point out that concurrent programming is not necessarily synonymous with parallel programming. Concurrent programming is about structuring applications as communicating tasks: a dialog box, for example, might implement each control as a separate task. Parallel programming, on the other hand, is explicitly about spreading the solution of a computational problem across more than one piece of execution hardware, essentially always for performance reasons (note: even having too little RAM counts as a performance reason when the alternative is swapping).
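As a loose illustration of the concurrency half of this distinction (the names here are made up for the example), each "control" can run as its own task and cooperate with the rest of the application purely by passing messages:

```python
# Sketch: concurrent programming as communicating tasks (illustrative only).
# A "control" runs as its own task and receives events over a queue;
# the point is program structure, not speed.
import queue
import threading

def control_task(events: "queue.Queue[str]", handled: list) -> None:
    while True:
        event = events.get()
        if event == "quit":  # sentinel telling the task to shut down
            break
        handled.append(f"handled {event}")

events: "queue.Queue[str]" = queue.Queue()
handled: list = []
task = threading.Thread(target=control_task, args=(events, handled))
task.start()
for e in ("click", "keypress", "quit"):
    events.put(e)
task.join()
print(handled)
```

Nothing here runs any faster than a single loop would; the tasks exist to organize the program, which is exactly why concurrency and parallelism should not be conflated.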

So I have to ask in return: which books do you mean? Are they about concurrent programming (I have several of those, and there is a lot of interesting theory in them) or about parallel programming?

If they really are about parallel programming, I will make a few remarks:

  • CUDA is a fast-moving target, and has been since its release. A book written about it today would be half obsolete by the time it made it into print.
  • The OpenCL standard was released a little less than a year ago. Stable implementations have only appeared in the last 8 months or so. There has simply not been enough time for a book to be written, let alone revised and published.
  • OpenMP is covered in at least a few of the parallel programming textbooks I have used. Prior to version 3 (which has only just been released), it was almost entirely about data-parallel programming.
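For comparison, the data-parallel style that OpenMP's parallel loops embody looks roughly like this in Python (a sketch only; OpenMP itself is a pragma system for C, C++ and Fortran, and in CPython a thread pool will not give a real CPU-bound speedup because of the GIL):

```python
# Sketch of a data-parallel loop: the same operation applied to every
# element, with loop iterations divided among a pool of workers.
from concurrent.futures import ThreadPoolExecutor

def square(x: int) -> int:
    return x * x  # identical work per element: classic data parallelism

data = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(square, data))  # iterations run on the workers
print(squares)
```

The shape is the important part: one operation, many independent data elements, and a runtime that decides how to split the iterations across hardware.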
+2

I think that those who work with data-parallel computing today mostly come from the world of cluster computing. OpenCL and CUDA target GPUs, which more or less accidentally evolved into general-purpose processors as graphics rendering algorithms grew more sophisticated.

Nevertheless, graphics people and high-performance computing people have been "discovering" each other for some time now, and a great deal of research is being done on using GPUs for general-purpose computing.

+1

"always" is a little strong; there are resources ( example ) that cover data-parallelism topics.

0

Hillis's classic book The Connection Machine was all about data parallelism. It is one of my favorites.

0
