In short, don't worry about it too much. But do keep it in mind.
If you want performance, you need to use the compiler's optimizations, which can work against common sense. Keep in mind that different compilers compile the same code differently and offer different kinds of optimization. As for g++ with its optimization level pushed up via -Ofast, or at least the -O3 flag, in my experience it can compile the long type into code that performs even better than any unsigned type, or even plain int.
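For what it's worth, here is a minimal sketch of the kind of experiment I mean (the function name, the element count, and the toy summation loop are my own illustration, not a standard benchmark; a good optimizer may simplify such a trivial loop, so treat the numbers only as a starting point). Build it with, say, g++ -O3 or g++ -Ofast and compare the timings for the different types:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Time a simple summation where both the loop counter and the element type are T.
    template <typename T>
    double time_sum(std::size_t n) {
        std::vector<T> data(n, T{1});
        T sum{};
        auto start = std::chrono::steady_clock::now();
        for (T i = 0; i < static_cast<T>(n); ++i)
            sum += data[static_cast<std::size_t>(i)];
        auto stop = std::chrono::steady_clock::now();
        std::printf("  sum = %lld\n", static_cast<long long>(sum));  // keep the result observable
        return std::chrono::duration<double>(stop - start).count();
    }

    int main() {
        const std::size_t n = 100'000'000;  // fits in int, unsigned and long alike
        std::printf("int      : %f s\n", time_sum<int>(n));
        std::printf("unsigned : %f s\n", time_sum<unsigned>(n));
        std::printf("long     : %f s\n", time_sum<long>(n));
    }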
This is from my own experience, and I recommend that you first write the complete program and only worry about such things afterwards, when you have real code in hand and can compile it with optimization enabled to try out and choose the types that actually work best overall. This is also very good general advice for optimizing code for performance: write it first, then compile with optimization and tune things to see what works best. You should also try building your program with different compilers and pick the one that emits the most efficient machine code.
An optimized multi-threaded linear algebra program can easily show a performance difference of more than 10x between a carefully optimized build and a non-optimized one. So it matters.
The output of the optimizer often contradicts intuition. For example, I once had a case where the difference between a[x] += b and a[x] = b changed the program's execution time by almost 2x. And no, a[x] = b was not the faster one.
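To make the contrast concrete, here is a simplified, self-contained sketch of the two variants (the function and parameter names are mine, not from the original program):

    #include <cstddef>

    // Read-modify-write: each store depends on the previous contents of a[idx[i]].
    void accumulate(float* a, const int* idx, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            a[idx[i]] += b[i];
    }

    // Plain store: the old value of a[idx[i]] is never read.
    void overwrite(float* a, const int* idx, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            a[idx[i]] = b[i];
    }

Intuition says the plain store should win, but as noted above, the measured result was the opposite, which is exactly why you should measure the optimized build rather than guess.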
Here, for example, is what NVIDIA says about programming its GPUs:
Note: As was already the recommended best practice, signed arithmetic should be preferred over unsigned arithmetic wherever possible for best throughput on SMM. The C language standard places more restrictions on overflow behavior for unsigned math, limiting compiler optimization opportunities.
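The same idea can be shown with a small host-side C++ sketch (my own illustration of the strided-access pattern the argument is about, not code from the NVIDIA documentation):

    // With a signed index, signed overflow is undefined behavior, so the compiler
    // may assume `stride * i` never wraps and strength-reduce it to a running offset.
    float sum_signed(const float* a, int n, int stride) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i)
            s += a[stride * i];
        return s;
    }

    // With an unsigned index, wraparound is well defined, which can block that
    // transformation.
    float sum_unsigned(const float* a, unsigned n, unsigned stride) {
        float s = 0.0f;
        for (unsigned i = 0; i < n; ++i)
            s += a[stride * i];
        return s;
    }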