How do modern optimizing compilers determine when to optimize?

How do modern optimizing compilers decide when to apply particular optimizations, such as loop unrolling and function inlining?

Since both of these optimizations affect caching, naive heuristics such as "inline every function shorter than X lines" are likely to produce worse executable code. So how do modern compilers deal with this?
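For concreteness, here is a rough sketch in C of the kind of naive rule I mean (the struct, field names, and the threshold X are all made up for illustration, not taken from any real compiler):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical compiler-internal summary of a function. */
struct function_summary {
    int line_count;      /* the size proxy the naive rule looks at */
    int call_site_count; /* ignored by the rule, even though total code
                            size grows with every call site inlined  */
};

/* Naive rule: inline anything shorter than X lines, no matter how much
 * extra code (and instruction-cache pressure) that creates. */
static bool naive_should_inline(const struct function_summary *f) {
    const int X = 10; /* arbitrary threshold */
    return f->line_count < X;
}

int main(void) {
    /* Tiny function called from 5000 places: inlining it everywhere
     * may bloat the executable, yet the rule says yes. */
    struct function_summary tiny_but_everywhere = { 8, 5000 };
    printf("inline? %d\n", naive_should_inline(&tiny_but_everywhere));
    return 0;
}
```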

It's hard for me to find information about this (especially information that is easy enough to understand); about the best I could find is a Wikipedia article. Any details, or links to books and articles, are welcome!

EDIT: since the answers mostly address the two optimizations I mentioned (inlining and loop unrolling), I just want to clarify that I am interested in any and all compiler optimizations, not only those two. I am mainly interested in optimizations performed at compile time, although JIT optimization is also of interest (albeit to a lesser extent).

Thanks!

+7
optimization compiler-optimization gcc compiler-construction
4 answers

Usually by being naive and hoping it is an improvement.

That's why just-in-time compilation is such a winning strategy: collect statistics, then optimize for the common case.
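A minimal sketch of that idea (the names, the threshold, and the two paths are all invented for illustration; real JITs such as HotSpot implement this with per-call-site counters and tiered compilation):

```c
#include <stdio.h>

/* Toy version of "collect statistics, then optimize the common case":
 * count how often a call site is reached, and once it is provably hot,
 * switch from the generic path to a specialized one. */

static long hot_counter = 0;
#define HOT_THRESHOLD 10000

static int slow_generic(int x)     { return x * x; } /* "interpreted" path */
static int fast_specialized(int x) { return x * x; } /* "compiled" path    */

static int call_site(int x) {
    if (++hot_counter < HOT_THRESHOLD)
        return slow_generic(x);  /* still profiling */
    return fast_specialized(x);  /* hot: use the optimized version */
}

int main(void) {
    long sum = 0;
    for (int i = 0; i < 20000; i++)
        sum += call_site(i % 100);
    printf("%ld\n", sum);
    return 0;
}
```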

Literature:

+5

You might take a look at the Spiral project.

Also, optimization is a hard problem in general. This partly explains why there are so many options for the gcc compiler. If you know something about caches and pages, you can do some things by hand and request that others be done through the compiler, but no two machines are the same, so the approach has to be ad hoc.
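Here is a sketch of what "doing some of it by hand and requesting the rest from the compiler" can look like with GCC-specific hints (the attributes are standard GCC extensions; the unroll pragma requires GCC 8 or later; the functions themselves are just placeholders):

```c
/* Override GCC's default heuristics for specific functions and loops. */

/* Force inlining regardless of GCC's size heuristics. */
static inline __attribute__((always_inline)) int sq(int x) { return x * x; }

/* Keep a cold function out of line so it does not pollute the hot
 * path's instruction cache. */
static __attribute__((noinline)) int report_error(int code) { return -code; }

int sum_squares(const int *a, int n) {
    if (n < 0)
        return report_error(1);
    int s = 0;
    /* Ask GCC to unroll this particular loop 4x instead of guessing. */
    #pragma GCC unroll 4
    for (int i = 0; i < n; i++)
        s += sq(a[i]);
    return s;
}
```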

+1

In short: they do it better than we do!

You can look at this: http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf

Didier

+1
Good question. You are asking about so-called speculative optimizations.

Dynamic (JIT) compilers use both static heuristics and online profile information. Static compilers use heuristics and offline profile information; the latter approach is usually referred to as PGO (profile-guided optimization).
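For the static case, GCC's PGO workflow looks like this (the flags are real GCC options; the function below is just a placeholder to show the hand-written alternative, the `__builtin_expect` hint):

```c
/* GCC's offline PGO workflow:
 *
 *   gcc -O2 -fprofile-generate prog.c -o prog   # instrumented build
 *   ./prog typical-input                        # run: collects counts
 *   gcc -O2 -fprofile-use prog.c -o prog        # rebuild using profile
 *
 * Without a profile, a static branch hint can be supplied by hand: */
#include <stdlib.h>

void process(int *p) {
    /* Tell GCC the error branch is unlikely, so the hot path is laid
     * out contiguously (friendlier to the instruction cache). */
    if (__builtin_expect(p == NULL, 0))
        abort();
    *p += 1;
}
```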

There are many articles on how such policies are developed. The most complete one is:

An Empirical Study of Method Inlining for a Java Just-In-Time Compiler

It also contains references to related work and well-substantiated criticism of some of the reviewed articles.

In general, modern compilers try to use impact analysis to estimate the potential effect of a speculative optimization before applying it.
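A common concrete instance is guarded (speculative) inlining of an indirect call: the compiler bets on the target observed in the profile, inlines that target, and keeps a cheap guard that falls back to the generic call if the bet turns out wrong. A hand-written C sketch of what the transformed code amounts to (all names invented for illustration):

```c
#include <stdio.h>

typedef int (*op_fn)(int);

static int add_one(int x) { return x + 1; } /* target the profile predicts */
static int negate(int x)  { return -x; }

/* What a compiler might generate for "fp(x)" after speculating that
 * fp is almost always add_one: guard + inlined body + generic fallback. */
static int call_op(op_fn fp, int x) {
    if (fp == add_one)   /* cheap guard protecting the speculation */
        return x + 1;    /* inlined body of add_one */
    return fp(x);        /* fallback path: ordinary indirect call */
}

int main(void) {
    printf("%d %d\n", call_op(add_one, 41), call_op(negate, 41));
    return 0;
}
```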

P.S. Loop unrolling is old classic material that only helps certain tight loops doing nothing but number-crunching operations (no calls, etc.). Method inlining is a much more important optimization in modern compilers.
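For illustration, here is what 4x unrolling of such a tight loop looks like when written by hand (a real compiler also emits a cleanup loop for trip counts not divisible by 4, which is omitted here for brevity):

```c
/* Original tight loop: */
double dot(const double *a, const double *b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Unrolled 4x with split accumulators: fewer branch/increment
 * instructions per element and more independent work for the CPU to
 * overlap. Assumes n % 4 == 0. Note that splitting the accumulator
 * changes the floating-point rounding order, which is why compilers
 * only apply this to FP code under flags like -ffast-math. */
double dot_unrolled(const double *a, const double *b, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (int i = 0; i < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}
```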

+1
