I am testing the following code:
#include <iostream>
#include <vector>
#include <algorithm>
#include <ctime>

int main(int argc, char* argv[])
{
    std::vector<int> v(10000000);

    clock_t then = clock();

    if (argc <= 1)
        std::for_each(v.begin(), v.end(), [](int& it){ it = 10098; });
    else
        for (auto it = v.begin(); it != v.end(); ++it)
            *it = 98775;

    std::cout << clock() - then << "\n";

    return 0;
}
I compile it with g++ 4.6 without any optimization flags, and this is what I get:
[javadyan@myhost experiments]$ ./a.out
260000
[javadyan@myhost experiments]$ ./a.out aaa
330000
[javadyan@myhost experiments]$
Compiling with -O1 gives the following (unsurprising) results:
[javadyan@myhost experiments]$ ./a.out
20000
[javadyan@myhost experiments]$ ./a.out aaa
20000
I use Linux 3.0 on a 2 GHz dual-core laptop, if that matters.
I am wondering how, in a program compiled without any optimizations, calling for_each with a lambda can take less time than a simple loop. Shouldn't there be at least a slight overhead from invoking the anonymous function? Is there any documentation on how this code
std::for_each(v.begin(), v.end(), [](int& it){ it = 10098; });
is handled by g++? What is the behavior of other popular compilers in this case?
UPDATE
I had not considered that it in the second version is compared against v.end() at each iteration, so v.end() is re-evaluated every time through the loop. With that fixed, the plain for loop takes less time than for_each. However, I'm still wondering how the compiler optimizes for_each when the -O1 flag is used.
user500944