My main question is: is there a difference between int and int8_t at runtime?
In the framework I work on, I often read code where some function parameters are declared as int8_t because "this particular parameter can never be outside the range -126 to 125."
In many places, int8_t is used for communication protocols or for splitting a packet into fields in an __attribute__((packed)) struct.
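For illustration, here is a minimal sketch of the kind of packed struct I mean (the field names and layout are made up, not from our actual protocol):

    #include <stdint.h>

    /* Hypothetical protocol packet: each field maps directly onto the
       wire format, and packed removes any padding between the fields. */
    struct __attribute__((packed)) packet {
        int8_t   type;        /* message type, small signed range */
        int8_t   priority;    /* constrained to a narrow range    */
        int16_t  length;      /* payload length in bytes          */
        uint8_t  payload[32]; /* raw payload bytes                */
    };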
But in other places it was put there simply because someone thought it would be better to use a type that more closely matches the size of the data, probably trying to think ahead of the compiler.
Given that the code runs on Linux, compiled with GCC against glibc, and that neither memory nor portability is a concern here, I am wondering whether this is really a good idea in terms of performance.
My first impression comes from the rule “Trying to be smarter than the compiler is always a bad idea” (if you don’t know where and how you need to optimize).
However, I don't know whether using int8_t has an actual performance cost (extra operations to truncate or sign-extend values so the variable stays within its 8-bit range, more work tied to the size of int8_t, etc.), or whether it actually improves performance.
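As a concrete (made-up) example of what I would want to compare: in the pair of functions below, integer promotion means both additions are performed as int, but the int8_t version also has to narrow the result back to 8 bits, which may or may not cost extra instructions depending on the compiler and target.

    #include <stdint.h>

    /* Same arithmetic, different types.  In sum8 the operands are promoted
       to int, added, and the result converted back to int8_t -- a narrowing
       step that sum_int never needs. */
    int8_t sum8(int8_t a, int8_t b) { return (int8_t)(a + b); }
    int    sum_int(int a, int b)    { return a + b; }

Whether that narrowing shows up as an extra instruction at all is exactly what I cannot tell without reading the assembly.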
I cannot read even simple asm, so I have not compiled test code down to assembly to find out which one is better.
I tried to find a related question, but all the discussions I found about int<size>_t versus int are about portability rather than performance.
Thanks for your input. Explained code samples or sources on this subject would be greatly appreciated.
c types micro-optimization
Daindwarf