This is a misconception. Neither gcc nor any other compiler can read object code, "compile" it, and produce object code that runs faster.
The closest thing is feedback-directed compilation (also called profile-guided optimization), in which you first compile the program with instrumentation (e.g. gcc -fprofile-generate), run the instrumented program so that it writes a profile data file (e.g. foo.gcda), and then compile the program a second time from the same source code, with the data file as additional compiler input (e.g. gcc -fprofile-use). This yields rather modest speedups, typically 5% to 10% in my experience.
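For concreteness, here is a minimal sketch of that three-step workflow; the file names (demo.c, representative-input.txt) are hypothetical, but the flags are the real gcc ones:

```
gcc -O2 -fprofile-generate demo.c -o demo   # step 1: instrumented build
./demo < representative-input.txt           # step 2: run it; writes demo.gcda
gcc -O2 -fprofile-use demo.c -o demo        # step 3: rebuild using the profile
```

The profiling run should use a workload representative of production, since the optimizer tunes the code layout for exactly the behavior it observed.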
Suppose you have a long chain of 50 if ... else if constructs (one that cannot be restructured as a switch). This often happens in Monte Carlo simulations, for example. If you are a reasonably experienced programmer, you will probably order the branches so that the one taken most often appears first. The idea is that at run time you don't waste cycles testing 30 unlikely branches before reaching the likely ones. More generally, you will try to order the branches from most probable to least probable, so that on average the fewest branch tests are executed before the right one is found.
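As an illustration, here is a hypothetical sketch in C of such a hand-ordered chain; the event codes and the frequency comments are invented for the example:

```c
#include <stdio.h>

/* Hypothetical event dispatcher: branches ordered by expected frequency,
 * most common first, so the typical call exits after one comparison. */
static int handle_event(int event)
{
    if (event == 7)          /* ~60% of events in this made-up workload */
        return 1;
    else if (event == 3)     /* ~25% */
        return 2;
    else if (event == 42)    /* ~10% */
        return 3;
    /* ... dozens of rarer cases would follow ... */
    else                     /* everything else is rare */
        return 0;
}

int main(void)
{
    printf("%d\n", handle_event(7));  /* common case: one test and done */
    return 0;
}
```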
Note that the compiler has no basis for reordering these branches, because the information about which branch is more likely than another is simply not in the source code, so the best it can do is emit the branches in source order.
With classic feedback compilation, you first build an instrumented version of the executable that, when run, records in a data file how many times each branch is taken (or not taken). The second time you compile, the compiler has empirical runtime data (which it normally lacks) that it can use to reorder the tests and insert branch hints, making the code run faster ... at least for workloads that resemble the profiled test run.
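In gcc you can also supply such branch hints by hand with the __builtin_expect builtin, which expresses the same kind of expectation the compiler derives automatically from the profile. A minimal sketch (the sum function is a hypothetical example):

```c
#include <stddef.h>

/* Wrappers around gcc's __builtin_expect, the same kind of branch hint
 * that profile feedback lets the compiler insert automatically. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Tell the compiler the pointer is almost never NULL, so it lays out
 * the non-NULL path as the straight-line fall-through case. */
int sum(const int *v, size_t n)
{
    if (unlikely(v == NULL))
        return 0;                 /* cold path */
    int total = 0;
    for (size_t i = 0; i < n; i++)
        total += v[i];
    return total;
}
```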
I am sure modern feedback-directed compilation is considerably more sophisticated, but that is the general idea.
Emmet