Is there any performance difference when using int versus int8_t

My main question is: Is there a difference between int and int8_t for runtime ?

In the framework that I work on, I often read code where some parameters are declared as int8_t in a function because "this particular parameter cannot be outside the range of -126 to 125."

In many places, int8_t is used for communication protocols, or for splitting a packet into fields in an __attribute__((packed)) struct.

But in some places it was basically put there because someone thought it would be better to use a type that more closely matches the size of the data, probably trying to think ahead of the compiler.

Given that the code runs on Linux, compiled with gcc against glibc, and that memory and portability are not concerns here, I am wondering whether this is really a good idea in terms of performance.

My first impression comes from the rule “Trying to be smarter than the compiler is always a bad idea” (if you don’t know where and how you need to optimize).

However, I don’t know whether using int8_t has an actual performance cost (extra tests and calculations to match the size of int8_t, more operations to keep the variable within range, etc.), or whether it actually improves performance.

I cannot read even simple asm, so I did not compile test code down to assembly to find out which one is better.

I tried to find a related question, but all the discussions I found comparing int&lt;size&gt;_t to int are about portability, not performance.

Thanks for your input. Explained assembly samples or sources on this subject would be greatly appreciated.

+8
c types micro-optimization
2 answers

int is usually the same size as the registers on the CPU. C says that any smaller integer types must be converted to int before operators are applied to them.

These conversions (sign extension) can be costly.

 int8_t a = 1, b = 2, c = 3;
 ...
 a = b + c;  // This translates to: a = (int8_t)((int)b + (int)c);

If you need speed, int is a safe bet, or use int_fast8_t (even safer). If you need an exact size, use int8_t (if it is available).

+7

When you talk about code performance, there are a few things to consider that affect it:

  • The architecture of the processor: which data types the processor supports natively (does it support 8-bit operations? 16-bit? 32-bit? etc.).
  • The compiler: knowing a well-known compiler is not enough; you need to be familiar with how the way you write your code affects the code it generates.
  • Data types and compiler intrinsics: these are always taken into account by the compiler when generating code; using the correct data type (even signed vs. unsigned) can have a significant impact on performance.

    "Trying to be smarter than the compiler is always a bad idea" - actually, it is not; remember that the compiler is written to optimize the general case, while you are interested in your specific case. It is always worth trying to be smarter than the compiler.

Your question is really too broad for me to give an answer "to the point" (i.e., which is better for performance). The only way to know for sure is to check the generated assembly code, or at least count the number of cycles the code would take in both cases. But you need to understand the code in order to understand how to help the compiler.

0
