I've searched around a bit and haven't really found an answer.
When programming on embedded devices with limited memory, I'm used to picking the smallest integral / floating-point type that will do the job. For example, if I know a counter will always stay between zero and 255, I declare it as a uint8_t.
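For illustration, here is a minimal sketch of the two habits I'm contrasting (the 200-iteration loop is just a made-up example):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Embedded habit: the counter is known to stay in [0, 255],
    // so uint8_t is wide enough.
    for (uint8_t i = 0; i < 200; ++i) {
        // ... work with i ...
    }

    // The "just use int" alternative the Google style guide suggests.
    for (int j = 0; j < 200; ++j) {
        // ... work with j ...
    }

    std::printf("sizeof(uint8_t) = %zu, sizeof(int) = %zu\n",
                sizeof(uint8_t), sizeof(int));
    return 0;
}
```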
However, in less restricted environments I just use int for everything, which matches the Google C++ Style Guide. When I look at existing code, this is how it's usually done.
To be clear, I understand the rationale for this (Google explains it quite well), but I can't quite articulate why I should do it this way in the first place.
It seems to me that shrinking your program's memory footprint, even on a system where you don't care about memory usage, should be good for overall speed, since less data overall means more of it fits in the processor cache.
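To make the cache argument concrete, here is a rough sketch assuming a typical 64-byte cache line (an assumption about the hardware, not something the language guarantees):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Assumed cache-line size; real hardware varies.
    constexpr std::size_t kCacheLine = 64;

    // On a typical platform this prints 64 uint8_t values per line
    // versus 16 ints per line, i.e. four times as many small counters
    // fit in the same amount of cache.
    std::printf("uint8_t per cache line: %zu\n", kCacheLine / sizeof(std::uint8_t));
    std::printf("int per cache line:     %zu\n", kCacheLine / sizeof(int));
    return 0;
}
```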
A complicating factor, however, is that compilers will automatically pad data and align it to boundaries so that it can be loaded in a single bus cycle. I suppose it comes down to whether compilers are smart enough to take, say, two 32-bit integers and pack them into one 64-bit block, rather than padding each of them out to 64 bits individually.
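Here is a minimal sketch of the alignment effect I mean, assuming a typical 64-bit ABI (the struct names are just for illustration). The compiler pads each field to its alignment boundary but does not reorder members for you, so whether two small integers end up sharing one block depends on how the struct is laid out:

```cpp
#include <cstdint>
#include <cstdio>

// Members ordered badly: each uint8_t is followed by padding so the
// uint64_t (and the next array element) lands on an 8-byte boundary.
struct Padded {
    std::uint8_t  a;   // 1 byte + 7 bytes padding
    std::uint64_t b;   // 8 bytes
    std::uint8_t  c;   // 1 byte + 7 bytes trailing padding
};

// Same members, with the small fields grouped so they share one block.
struct Packed {
    std::uint64_t b;   // 8 bytes
    std::uint8_t  a;   // 1 byte
    std::uint8_t  c;   // 1 byte + 6 bytes trailing padding
};

int main() {
    // Typically prints 24 and 16 on a 64-bit target, though the exact
    // numbers depend on the ABI.
    std::printf("sizeof(Padded) = %zu, sizeof(Packed) = %zu\n",
                sizeof(Padded), sizeof(Packed));
    return 0;
}
```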
I suppose whether the processor can actually take advantage of this also depends on its exact internals, but the idea that optimizing for memory size improves performance, especially on newer processors, seems to be supported by the fact that the Linux kernel has for some time offered the option of building with gcc -Os for better performance.
So I guess my question is: why does the Google approach seem to be so much more common in real code? Is there a hidden cost here that I'm missing?