if (x < 0 || x > MaxSize)
The comparison is performed by the processor's CMP (Compare) instruction. Take a look at Agner Fog's instruction tables document (PDF); it lists the cost of instructions. Find your processor in the list, then look up the CMP instruction.
For mine, Haswell, CMP has a latency of 1 cycle and a reciprocal throughput of 0.25 cycles.
A fractional cost like that needs an explanation: Haswell has 4 integer execution units that can execute instructions at the same time. When a program contains enough integer operations, such as CMP, without interdependencies, they can all execute simultaneously; in effect the program runs 4 times faster. You can't always keep all 4 of them busy with your code; in fact that is quite rare. But in this case you keep 2 of them busy. In other words, the two comparisons take as long as one: 1 cycle.
There are other factors at play that make the runtimes identical. One thing that helps is that the processor can predict the branch very well: it can speculatively execute x > MaxSize even though short-circuit evaluation hasn't reached it yet. And that speculation produces a usable result, since the branch is never actually taken.
And the real bottleneck in this code is the array indexing; a memory access is one of the slowest things a processor does. So the "fast" version of the code isn't any faster, even though it gives the processor more opportunity to execute instructions concurrently. Not that this is much of an opportunity today; a processor has far too many execution units to keep busy. It's otherwise the feature that makes HyperThreading work. In both cases the processor chugs along at the same speed.
On my machine, I have to write code that needs more than 4 execution units before it gets slower. Silly code like this:
if (x < 0 || x > MaxSize || x > 10000000 || x > 20000000 || x > 3000000) {
    outOfRange++;
}
else {
    inRange++;
}
With 5 comparisons I can now see a difference: 61 vs 47 ms. Or, in other words, this is a way to count the number of integer execution units in a processor. Hehe :)
So this is a micro-optimization that probably paid off decades ago. It no longer does. Scratch it off your list of things to worry about :)