Recently, I have been using C++11 again, and where I would have used iterators in the past, I now use range-based for loops whenever possible:
std::vector<int> coll(10); std::generate(coll.begin(), coll.end(), []() { return rand(); } );
C++03:
for (std::vector<int>::const_iterator it = coll.begin(); it != coll.end(); ++it) { foo_func(*it); }
C++11:
for (auto e : coll) { foo_func(e); }
But what if the element type of the collection is a template parameter? foo_func() is likely to be overloaded to take complex (i.e. expensive-to-copy) types by const reference and simple types by value:
void foo_func(const BigType& e) { ... } void foo_func(int e) { ... }
I never had to think about this with the C++03-style code above: dereferencing a const_iterator yields a const reference, so everything just worked. With a C++11 range-based for loop, however, I have to declare the loop variable as a const reference to get the same behavior:
for (const auto& e : coll) { foo_func(e); }
And suddenly I was no longer sure whether this introduces unnecessary machine instructions when auto deduces a simple type (after all, a reference is typically implemented as a pointer behind the scenes).
But compiling a sample application confirmed that there is no overhead for simple types, so this seems to be the general way to use range-based for loops in templates. If it were not, boost::call_traits::param_type would be the right tool.
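For reference, the idea behind boost::call_traits&lt;T&gt;::param_type can be sketched with the standard library alone. The heuristic below (trivially copyable and no larger than a pointer) is my own simplification; Boost's actual rules differ in detail:

```cpp
#include <type_traits>

// Simplified stand-in for boost::call_traits<T>::param_type:
// pass small trivially copyable types by value, everything else
// by const reference. (Heuristic chosen for illustration only.)
template <typename T>
using param_type = typename std::conditional<
    std::is_trivially_copyable<T>::value && sizeof(T) <= sizeof(void*),
    T,            // cheap: by value
    const T&      // expensive: by const reference
>::type;

static_assert(std::is_same<param_type<int>, int>::value,
              "int is passed by value");

struct Big { char data[128]; };
static_assert(std::is_same<param_type<Big>, const Big&>::value,
              "large types are passed by const reference");
```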
Question: Are there any guarantees in the standard?
(I understand that the issue is not specific to range-based for loops; the same question arises when using const_iterators.)