My suspicion is that it is worth switching to call-by-reference as soon as the size of the primitive type in bytes exceeds the size of an address. Even if the difference is small, I would like to take advantage of it, because I call some of these functions very often.
Performance tuning based on hunches works about 0% of the time in C++ (it feels like I have statistics on that; that's usually how it goes...).
It is true that passing const T& will be cheaper than passing T if sizeof(T) > sizeof(ptr), i.e. usually above 32 or 64 bits, depending on the system.
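For reference, here is a minimal check you can compile yourself; the values in the comments assume a typical 64-bit platform, and the exact sizes are implementation-defined:

```cpp
#include <cstdio>

int main() {
    // On a typical 64-bit system a pointer (roughly what passing a
    // reference costs) is 8 bytes: the same as a double, twice an int.
    std::printf("sizeof(int)    = %zu\n", sizeof(int));    // usually 4
    std::printf("sizeof(double) = %zu\n", sizeof(double)); // usually 8
    std::printf("sizeof(void*)  = %zu\n", sizeof(void*));  // usually 8
}
```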
Now ask yourself:
1) How many built-in types are larger than 64 bits?
2) Is avoiding a 32-bit copy really worth making the code less readable? If your function gets noticeably faster because you didn't copy a 32-bit value into it, maybe it isn't doing very much in the first place?
3) Are you really that smart? (spoiler alert: no.) See this great answer for why it is almost always a bad idea: fooobar.com/questions/111666 / ...
Ultimately, just pass by value. If, after (thorough) profiling, you find that some function is a bottleneck, and all the other optimizations you have tried are not enough (and you should try most of them first), switch it to pass-by-const-reference, roughly as in the sketch below.
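A minimal sketch of that rule of thumb; BigMatrix is a made-up type used only for illustration, not anything from the original question:

```cpp
// Default: small, trivially copyable types go by value.
double scale(double x, double factor) {
    return x * factor;
}

// Hypothetical large type where const& actually pays off:
// copying it would move 32 KiB per call.
struct BigMatrix {
    double data[64][64];
};

double trace(const BigMatrix& m) {
    double sum = 0.0;
    for (int i = 0; i < 64; ++i)
        sum += m.data[i][i];
    return sum;
}
```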
Then watch as it changes nothing. Roll around and cry.