You can write x / 2 * 2, and the compiler will create very efficient code to clear the least significant bit if x is of unsigned type.
Conversely, you can write:
x = x & ~1;
Or perhaps less readable:
x = x & -2;
Or even
x = (x >> 1) << 1;
Or this too:
x = x - (x & 1);
Or this last one, suggested by supercat, which works for positive values of all integer types and representations:
x = (x | 1) ^ 1;
All the above expressions work correctly for all unsigned integer types on 2's complement architectures. Whether the compiler will create the optimal code is a matter of configuration and implementation quality.
Note that x & (~1u) does not work if the type of x is larger than unsigned int. This is a counter-intuitive trap. If you insist on using an unsigned constant, you should write x & ~(uintmax_t)1, since even x & ~1ULL will fail if x is of a larger type than unsigned long long. Even worse, many platforms now have integer types larger than uintmax_t, such as __uint128_t.
Here is a small test:
typedef unsigned int T;
T test1(T x) { return x / 2 * 2; }
T test2(T x) { return x & ~1; }
T test3(T x) { return x & -2; }
T test4(T x) { return (x >> 1) << 1; }
T test5(T x) { return x - (x & 1); }
T test6(T x) { return (x | 1) ^ 1; }
As Ruslan showed, testing on the Godbolt Compiler Explorer shows that gcc -O1 generates the exact same code for all of the above alternatives for unsigned int, but changing type T to unsigned long long produces different code for test1.
chqrlie Oct 18 '17 at 9:36