Can a C++ implementation theoretically parallelize the evaluation of two function arguments?

Given the following function call:

f(g(), h()) 

Since the order of evaluation of function arguments is unspecified (this is still the case in C++11, as far as I know), could an implementation theoretically evaluate g() and h() in parallel?

Such parallelization could only kick in when g and h are known to be quite trivial (in the most obvious case, accessing only data local to their bodies), so that no concurrency problems are introduced; but apart from that restriction, I see nothing that would prohibit it.

So, does the standard allow it? Even if only by the as-if rule?

(In this answer, Mankarse asserts otherwise; however, he does not quote the standard, and my reading of [expr.call] has not turned up any obvious wording to that effect.)
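For concreteness, here is a minimal hand-written sketch (using std::async) of the kind of parallel evaluation I have in mind; the bodies of f, g, and h are placeholders I made up, touching only local data as described above:

#include <future>
#include <iostream>

int g() { return 1; }   // trivial: only local data
int h() { return 2; }   // trivial: only local data
void f(int a, int b) { std::cout << a + b << '\n'; }

int main() {
    // Explicit parallel evaluation of the two argument expressions.
    auto fg = std::async(std::launch::async, g);
    auto fh = std::async(std::launch::async, h);
    f(fg.get(), fh.get());
    // The question: may an implementation do this implicitly for f(g(), h())?
}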

+68
c++ language-lawyer
Nov 18 '12 at 19:08
4 answers

The requirement comes from [intro.execution]/15:

... When calling a function ... Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function. [ Footnote: In other words, function executions do not interleave with each other. ]

Thus, any execution of the body of g() must be indeterminately sequenced with (and therefore not overlap) the evaluation of h() (because h() is an evaluation in the calling function).

The critical point here is that g() and h() are both function calls.

(Of course, the as-if rule means the possibility cannot be entirely ruled out, but it could never happen in a way that affects the observable behaviour of the program. At most, such an implementation could change the performance characteristics of the code.)
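For example (a sketch I am adding, with bodies invented to be purely local): when g and h compute only from local data and perform no observable operations, an implementation that overlapped their evaluation could not be distinguished from one that did not, which is exactly the performance-only kind of change described above.

#include <iostream>

// Purely local computation: no side effects, no shared data.
long long g() {
    long long s = 0;
    for (long long i = 0; i < 1000000; ++i) s += i;
    return s;
}

// Also purely local.
long long h() {
    long long p = 1;
    for (int i = 0; i < 30; ++i) p *= 2;
    return p;
}

long long f(long long a, long long b) { return a + b; }

int main() {
    // The output is the same however an implementation schedules g and h,
    // so overlapping them could only change timing, never observable behaviour.
    std::cout << f(g(), h()) << '\n';
}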

+42
Nov 18 '12 at 23:50

As long as you can't tell the difference, it does not matter what the compiler does to evaluate these functions; it is entirely up to the compiler. Obviously, the evaluation of the functions may not involve any access to shared mutable data, as that would introduce a data race. The basic guiding principle is the "as-if" rule together with the fundamental observable operations, i.e. access to volatile data, I/O operations, access to atomic data, etc. The relevant section is 1.9 [intro.execution].
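To make the distinction concrete, here is a small sketch with invented names (h_io, h_local, and f are mine, not from this answer): one argument expression performs I/O, an observable operation that pins down ordering, while the other touches only local data and could be overlapped without any observable difference.

#include <iostream>

// Performs I/O, which is observable behaviour: an implementation may not
// interleave or reorder this in a way that changes what the program prints.
int h_io() { std::cout << "side effect\n"; return 1; }

// Touches only local data: overlapping its execution with anything else
// could not be detected, so the as-if rule leaves the implementation free.
int h_local() { int x = 21; return x * 2; }

int f(int a, int b) { return a + b; }

int main() {
    std::cout << f(h_io(), h_local()) << '\n';
}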

+16
Nov 18 '12 at 19:25

Not unless the compiler knows exactly what g(), h(), and everything they in turn call actually do.

The two expressions are function calls, which may have unknown side effects. Therefore, parallelizing them could introduce a data race on those side effects. Since the C++ standard does not allow argument evaluation to introduce a data race on any side effects of the expressions, the compiler can only parallelize them if it knows that no such data race is possible.

That means walking into each function to see what it does and/or calls, then walking into those functions, and so on. In the general case, this is not feasible.
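To illustrate (the names and bodies below are invented for this sketch, and it is a single translation unit rather than a complete program): the compiler can see everything visible_g and visible_h do, so it could in principle prove that overlapping them is harmless; it cannot do the same for opaque_g and opaque_h, whose definitions live in another translation unit and might touch shared data.

// Declarations only: the definitions live elsewhere, so their side effects
// are unknown to this translation unit.
int opaque_g();
int opaque_h();

// Fully visible bodies with no side effects and no shared data.
static int visible_g() { return 40; }
static int visible_h() { return 2; }

int f(int a, int b) { return a + b; }

// Provably free of data races: parallel evaluation could not be observed.
int use_visible() { return f(visible_g(), visible_h()); }

// The compiler must assume the worst about opaque_g and opaque_h.
int use_opaque() { return f(opaque_g(), opaque_h()); }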

+3
Nov 18 '12 at 19:24

The easy answer: as long as the function calls are sequenced, even if indeterminately, there is no possibility of a race condition between them, which is not true if they are parallelized. Even a pair of "trivial" one-line functions can demonstrate this.

 void g() { *p = *p + 1; }
 void h() { *p = *p - 1; }

If p is a pointer shared by g and h, then calling g and h sequentially, in either order, leaves the value pointed to by p unchanged. If they are parallelized, the read of *p and the assignment to it can interleave between the two calls:

  • g reads *p and finds the value 1.
  • h reads *p and also finds the value 1.
  • g writes 2 to *p .
  • h, still using the value 1 that it read earlier, writes 0 to *p .

Thus, parallelizing them produces different behaviour.
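Here is a runnable version of that example (p, g, and h follow the definitions above; main and the initialisation are additions for illustration). Run sequentially, in either order, the two calls cancel out; replacing the sequential calls with two unsynchronized threads would produce exactly the interleaving sketched above, i.e. a data race and undefined behaviour.

#include <iostream>

int* p;

void g() { *p = *p + 1; }
void h() { *p = *p - 1; }

int main() {
    int value = 1;
    p = &value;
    // Indeterminate sequencing is fine: either order leaves *p at 1.
    g();
    h();
    std::cout << *p << '\n';  // prints 1
}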

+1
Nov 21 '12 at 6:23


