What is the advantage of static (compile-time) instruction scheduling over dynamic scheduling?

Superscalar RISC CPUs today generally support out-of-order execution, with branch prediction and speculative execution: they schedule work dynamically.

What is the advantage of compiler scheduling compared with the dynamic scheduling done by an out-of-order processor? Does static compile-time scheduling matter for out-of-order processors, or only for simple in-order processors?

It seems that at present, most compile-time scheduling work focuses on VLIW or simple in-order processors. The GCC wiki's scheduling page also shows little interest in updating GCC's scheduling algorithms.


Advantages of static (compiler) scheduling:

  • No time limit, so it can use very sophisticated algorithms;
  • No instruction-window limit: it can, for example, move instructions across an entire loop or across a function call.

Advantages of dynamic (CPU) scheduling:

  • It accounts for the actual run-time environment (cache misses, an arithmetic unit busy because of another hyper-thread);
  • The code does not have to be recompiled for every architecture update.

That is all I can think of now.


First, I should note that current mainstream architectures decode and then reschedule: the "high-level" machine instructions are broken down into smaller RISC-like micro-operations. At least this is true for x86 / x64 architectures.

We can then picture the execution pipeline as: compile, optimize/reorder (in the compiler), decode into micro-operations, then reorder/optimize again (in hardware).

To answer the question: the compiler has a much wider view of the application, so it mainly optimizes at the macro level (blocks of program instructions), while the processor mainly optimizes at the micro level (sequences of RISC-like micro-operations).

