Suppose the following operations are performed:
int i=0,j=0; i++; i++; i++; j++; j++; j++;
Ignoring for the moment that the compiler will likely optimize the three increments into a single += 3 , you will get higher throughput from the processor pipeline if you reorder the operations as
i++; j++; i++; j++; i++; j++;
since j++ does not need to wait for the result of i++ , whereas in the previous ordering most instructions had a data dependency on the instruction immediately before them. In more complex calculations, where there is no easy way to reduce the number of instructions to execute, the compiler can still analyze the data dependencies and reorder the instructions so that each instruction is placed as far as possible from the instruction whose result it depends on.
Another such optimization applies when you are dealing with pure functions . Looking again at a simple example, suppose you have a pure function f(int x) whose result you sum over a loop:
int tot = 0;
int x; // something known only at runtime
for(int i = 0; i < 100; i++)
    tot += f(x);
Since f is a pure function, the compiler can reorder calls to it as it pleases. In particular, it can convert this loop to
int tot = 0;
int x; // something known only at runtime
int fval = f(x);
for(int i = 0; i < 100; i++)
    tot += fval;
Pradhan