void reset_if_true(void*& ptr, bool cond)
{
    if (cond)
        ptr = nullptr;
}
The naive solution will almost certainly be the fastest in the vast majority of cases. Although it has a branch, and branches can be slow on modern pipelined processors, a branch is only slow if it is mispredicted. Since branch predictors are very good these days, unless the value of cond is extremely unpredictable, a simple conditional branch is likely the fastest way to write the code.
And if it is not, a good compiler should know that and be able to optimize the code into something better for the target architecture, which goes to gnasher729's point: just write the code the simple way and leave the optimization in the hands of the optimizer.
While this is good advice in general, sometimes it is taken too far. If you really care about the speed of this code, you need to check and see what the compiler actually does with it. Check the object code that it generates, and make sure that it is sensible and that the function's code is getting inlined.
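One low-friction way to do that check is to feed the compiler a tiny translation unit containing a call site and read the assembly it emits. This is just a sketch of my own (the file name "probe.cpp" and the caller function are made up, and the flags shown are for GCC; pasting the code into Compiler Explorer works just as well):

// probe.cpp -- compile with "g++ -O2 -S -masm=intel probe.cpp" and read probe.s
void reset_if_true(void*& ptr, bool cond)
{
    if (cond)
        ptr = nullptr;
}

void* caller(void* p, bool cond)
{
    reset_if_true(p, cond);   // verify that no 'call' instruction is emitted here
    return p;
}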
Such an investigation can be quite revealing. For example, consider x86-64, where branches can be quite expensive when they are mispredicted (which is really the only time this is an interesting question, so let's assume that cond is completely unpredictable). Virtually all compilers are going to generate the following for the naive implementation:
reset_if_true(void*&, bool):
    test   sil, sil               ; test 'cond'
    je     CondIsFalse
    mov    QWORD PTR [rdi], 0     ; set 'ptr' to nullptr, and fall through
CondIsFalse:
    ret
This is about as tight as code gets. But if you put the branch predictor in a pathological case, it may end up being slower than using a conditional move:
reset_if_true(void*&, bool):
    xor    eax, eax               ; pre-zero the RAX register
    test   sil, sil               ; test 'cond'
    cmove  rax, QWORD PTR [rdi]   ; if 'cond' is false, load RAX with 'ptr'
    mov    QWORD PTR [rdi], rax   ; set 'ptr' to the value in RAX
    ret                           ;  (which is either the old 'ptr' or 0)
Conditional moves have a relatively high latency, so they are significantly slower than a well-predicted branch, but they can be faster than a completely unpredictable one. You would expect a compiler to know this when targeting x86, but it has no knowledge (at least in this simple example) of how predictable cond is. It assumes the common case, that branch prediction will be on your side, and generates the first (branching) version instead of the second (branchless) one.
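If you do know that cond is essentially a coin flip, some compilers let you say so directly. As a sketch, assuming GCC 9 or later (where __builtin_expect_with_probability is available; recent Clang also accepts it, and the compiler is still free to ignore the hint):

void reset_if_true_hinted(void*& ptr, bool cond)
{
    // A probability near 0.5 tells the optimizer this branch is essentially
    // unpredictable, which can nudge it toward branchless code on some targets.
    if (__builtin_expect_with_probability(cond, 1, 0.5))
        ptr = nullptr;
}

That hint is compiler-specific, though; the rewrites below stay within standard C++.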
If you decide that you want the compiler to generate branchless code because the condition is unpredictable, you might try the following:
void reset_if_true_alt(void*& ptr, bool cond)
{
    ptr = cond ? nullptr : ptr;
}
This succeeds in convincing modern versions of Clang to generate the branchless conditional-move code, but it is a complete pessimization on GCC and MSVC. If you had not checked the generated assembly, you would not have known that. If you want to force GCC and MSVC to generate branchless code, you will have to work harder. For example, you might use the variation posted in the question:
void reset_if_true(void*& ptr, bool cond)
{
    void* p[] = { ptr, nullptr };
    ptr = p[cond];
}
When targeting x86, all compilers generate branchless code for this, but it is not especially pretty code. In fact, none of them generate conditional moves. Instead, you get multiple memory accesses to build the array:
reset_if_true_alt(void*&, bool):
    mov     rax, QWORD PTR [rdi]
    movzx   esi, sil
    mov     QWORD PTR [rsp-16], 0
    mov     QWORD PTR [rsp-24], rax
    mov     rax, QWORD PTR [rsp-24+rsi*8]
    mov     QWORD PTR [rdi], rax
    ret
Ugly and probably quite inefficient. I would predict that the conditional-branch version gives it a run for its money even when the branch is mispredicted. You would have to test it to be sure, of course, but it is probably not a good choice.
If you were still desperate to eliminate the branch on MSVC or GCC, you would have to do something uglier involving reinterpreting the pointer's bits and twiddling them. Something like:
#include <cstdint>

void reset_if_true_alt(void*& ptr, bool cond)
{
    std::uintptr_t p = reinterpret_cast<std::uintptr_t&>(ptr);
    p &= -(!cond);   // mask is all ones if 'cond' is false, all zeros if true
    ptr = reinterpret_cast<void*>(p);
}
This will give you the following:
reset_if_true_alt(void*&, bool):
    xor     eax, eax
    test    sil, sil
    sete    al
    neg     eax
    cdqe
    and     QWORD PTR [rdi], rax
    ret
Again, that is more instructions than a simple branch, but at least they are relatively low-latency instructions. A benchmark on realistic data will tell you whether the tradeoff is worth it, and give you the justification you will need to put in a comment if you are actually going to check in code like this.
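As a sketch of what such a test might look like (the harness below is my own, not from the answer; note that the optimizer may inline, vectorize, or otherwise transform the loop, so the generated assembly of the benchmark itself also needs checking):

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

// Swap in the other implementations here to compare them.
void reset_if_true(void*& ptr, bool cond)
{
    if (cond)
        ptr = nullptr;
}

int main()
{
    constexpr std::size_t N = 1 << 22;
    int dummy = 0;

    // Unpredictable pattern: a 50/50 coin flip per element. For the
    // predictable case, replace the coin flip with a constant.
    std::mt19937 gen(42);
    std::bernoulli_distribution coin(0.5);
    std::vector<char> flags(N);
    for (auto& f : flags)
        f = coin(gen);

    std::vector<void*> ptrs(N, static_cast<void*>(&dummy));

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        reset_if_true(ptrs[i], flags[i]);
    auto t1 = std::chrono::steady_clock::now();

    // Read the results back so the stores cannot be optimized away.
    std::size_t nulls = 0;
    for (std::size_t i = 0; i < N; ++i)
        nulls += (ptrs[i] == nullptr);

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("nulls: %zu, elapsed: %lld us\n", nulls, static_cast<long long>(us));
    return 0;
}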
Once I went down the bit-twiddling rabbit hole, I was able to get MSVC and GCC to use conditional move instructions. Apparently they were not doing this optimization because we were dealing with a pointer:
void reset_if_true_alt(void*& ptr, bool cond)
{
    std::uintptr_t p = reinterpret_cast<std::uintptr_t&>(ptr);
    ptr = reinterpret_cast<void*>(cond ? 0 : p);
}
reset_if_true_alt(void*&, bool):
    mov     rax, QWORD PTR [rdi]
    xor     edx, edx
    test    sil, sil
    cmovne  rax, rdx
    mov     QWORD PTR [rdi], rax
    ret
Given the latency of CMOVNE and the similar number of instructions, I'm not sure whether this would actually be faster than the previous version. The benchmark you ran would tell you whether it is.
Similarly, if we bit-twiddle the condition itself, we save a memory access:
void reset_if_true_alt(void*& ptr, bool cond)
{
    std::uintptr_t c = (cond ? 0 : -1);
    reinterpret_cast<std::uintptr_t&>(ptr) &= c;
}
reset_if_true_alt(void*&, bool):
    xor     esi, 1
    movzx   esi, sil
    neg     rsi
    and     QWORD PTR [rdi], rsi
    ret
(That is GCC. MSVC does something slightly different, preferring its characteristic neg, sbb, neg, and dec sequence of instructions, but the two are morally equivalent. Clang transforms it into the same conditional-move code that we saw it generate above.) This may be the best code of all if branches need to be avoided, given that it produces reasonable output on all of the tested compilers while preserving (some) readability in the source.
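For completeness, a quick sanity check (a sketch; the test values are mine, not from the answer) confirming that this last version preserves the semantics of the original branchy function:

#include <cassert>
#include <cstdint>

void reset_if_true_alt(void*& ptr, bool cond)
{
    std::uintptr_t c = (cond ? 0 : -1);
    reinterpret_cast<std::uintptr_t&>(ptr) &= c;
}

int main()
{
    int x = 0;
    void* p = &x;

    reset_if_true_alt(p, false);
    assert(p == &x);        // 'cond' is false: pointer is left unchanged

    reset_if_true_alt(p, true);
    assert(p == nullptr);   // 'cond' is true: pointer is cleared

    return 0;
}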