What are the benefits of just-in-time compilation versus ahead-of-time compilation?

I've been thinking about this lately, and it seems to me that most of the benefits attributed to JIT compilation really belong to the intermediate format itself, and that jitting as such is not a particularly good way to generate code.

So these are the main pro-JIT-compilation arguments I usually hear:

  • Just-in-time compilation allows for greater portability. Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling that bytecode into native code as soon as it arrives on your machine. Portability is an issue of the distribution phase, not of the launch phase.
  • Okay, what about generating code at runtime? Well, the same applies. Nothing prevents you from bundling a just-in-time compiler, for those cases where code really must be generated just in time, into your natively compiled program.
  • But the runtime would compile your code just once anyway, and store the resulting executable in some cache somewhere on your hard drive. Yes, of course. But it optimized your program under time constraints, and it isn't getting any better from there. See the next paragraph.

It's not as if ahead-of-time compilation has no advantages of its own. Just-in-time compilation has time constraints: you can't keep the end user waiting while your program launches, so the compiler has a trade-off to make somewhere. Most of the time it simply optimizes less. A friend of mine had profiling evidence that inlining functions and unrolling loops by hand (obfuscating the source code in the process) had a positive impact on the performance of his C# number-crunching program; doing the same on my side, with my C program performing the same task, yielded no positive results, and I believe this is due to the extensive transformations my compiler was allowed to make.
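For illustration, here is a minimal sketch of my own (in Java rather than the C#/C of the actual experiment; the names are made up) of the kind of manual loop unrolling meant above. An optimizing ahead-of-time compiler typically performs this rewrite by itself, which is why doing it by hand mainly pays off when a time-constrained JIT does not:

    // Summing an array with and without hand-unrolling the loop.
    public final class UnrollSketch {
        static long sumPlain(int[] a) {
            long s = 0;
            for (int i = 0; i < a.length; i++) {
                s += a[i];
            }
            return s;
        }

        static long sumUnrolled(int[] a) {
            long s = 0;
            int i = 0;
            for (; i < a.length - 3; i += 4) {   // four elements per iteration
                s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
            }
            for (; i < a.length; i++) {          // remaining tail
                s += a[i];
            }
            return s;
        }

        public static void main(String[] args) {
            int[] data = new int[1_000_000];
            java.util.Arrays.fill(data, 1);
            System.out.println(sumPlain(data));     // 1000000
            System.out.println(sumUnrolled(data));  // 1000000
        }
    }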

And yet we are surrounded by JIT-compiled programs. C# and Java both rely on it, Python scripts can be compiled to some kind of bytecode, and I'm sure a whole bunch of other languages do the same. There must be a good reason that I'm missing. So what makes just-in-time compilation so superior to ahead-of-time compilation?




EDIT To clear up some confusion, it may be important to state that I am all for an intermediate representation of executables. It has many advantages (and indeed, most arguments for just-in-time compilation are actually arguments for an intermediate representation). My question is about how that representation should be compiled to native code.

Most runtimes (or compilers, for that matter) prefer to compile it either just in time or ahead of time. Since ahead-of-time compilation looks like the better alternative to me, because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others go the other way around. I'm rather dubious about profiling-related optimizations, as my experience with just-in-time compiled programs has shown only weak basic optimizations.

I used a C example only because I needed an example of ahead-of-time compilation. The fact that the C code was not compiled to an intermediate representation is irrelevant, since I only needed to show that ahead-of-time compilation can give better immediate results.

+55
compilation jit
Jan 21
8 answers

The ngen page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Here is a list of the advantages of executables that are compiled ahead of time:

  • Native images load faster because they don't have much startup activity, and they require a fixed amount of memory less (the memory the JIT compiler would need);
  • Native images can share library code, while JIT-compiled images cannot.

And here is a list of the advantages of JIT-compiled executables:

  • Native images are larger than their bytecode counterpart;
  • Native images must be regenerated whenever the original assembly or one of its dependencies is modified (which makes sense, since such a change can break virtual tables and the like).

And Microsoft’s general thoughts on this:

  • Large applications generally benefit from being compiled ahead of time, while small ones generally don't;
  • Any call into a function loaded from a dynamic library incurs the overhead of one extra jump instruction for fixups.

The need to regenerate an ahead-of-time-compiled image every time one of its components changes is a huge disadvantage of native images. It is the root of the fragile base class problem. In C++, for instance, if the layout of a class from a DLL your native application uses changes, you're screwed. If you program against interfaces instead, you're still screwed if the interface changes. If you instead use a more dynamic language (say, Objective-C), you're fine, but at the cost of a performance hit.

Bytecode images, on the other hand, don't suffer from this problem, and without taking that performance hit. This in itself is a very good reason to design a system around an intermediate representation that can easily be regenerated into native code.

+12
Jan 24 '10 at 3:36
  • Greater portability: the deliverable (bytecode) stays portable.

  • At the same time, more platform-specific: because JIT compilation happens on the same system the code runs on, it can be very, very finely tuned for that particular system. If you do ahead-of-time compilation (and still want to ship the same package to everyone), you have to compromise.

  • Improvements in compiler technology can benefit existing programs. A better C compiler does not help you at all with programs already deployed. A better JIT compiler will improve the performance of existing programs. The Java code you wrote ten years ago will run faster today.

  • Adaptation to run-time metrics. A JIT compiler can look not only at the code and the target system, but also at how the code is used. It can instrument the running code and make decisions about how to optimize according to, for example, what values the method parameters usually happen to have (a sketch of this follows below).
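To make that concrete, here is a hypothetical Java sketch of my own (not from the answer): a method with a flag parameter that turns out to be always false at run time. HotSpot records branch profiles, so the JIT may compile the hot loop with the never-taken branch pruned (recompiling if the assumption is ever violated), which an ahead-of-time compiler could not safely assume:

    public class BranchProfileSketch {
        static long process(int[] data, boolean verbose) {
            long sum = 0;
            for (int value : data) {
                if (verbose) {
                    // Never taken in this run; profiling lets the JIT leave it out
                    // of the compiled code and recompile if it is ever reached.
                    System.out.println("processing " + value);
                }
                sum += value;
            }
            return sum;
        }

        public static void main(String[] args) {
            int[] data = new int[1_000_000];
            java.util.Arrays.fill(data, 2);
            System.out.println(process(data, false));  // the flag is always false here
        }
    }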

You are right that JIT adds to the startup cost, and therefore it operates under time constraints, whereas ahead-of-time compilation can take all the time it wants. This makes JIT more appropriate for server-type applications, where startup time is not that important and a "warm-up phase" before the code gets really fast is acceptable.

I suppose it would be possible to store the result of the JIT compilation somewhere so that it could be reused the next time. That would give you "ahead-of-time" compilation for the second run of the program. Maybe the clever folks at Sun and Microsoft are of the opinion that a fresh JIT is already good enough and the extra complexity is not worth the trouble.

+22
Jan 21

Simple logic tells us that compiling a program of MS Office size, even from bytecode, will simply take too much time. You'd end up with an enormous startup time, and that would scare anyone off your product. Sure, you can precompile during installation, but that has consequences too.

Another reason is that not all parts of an application will actually be used. The JIT compiles only the parts the user cares about, leaving potentially 80% of the code untouched and saving time and memory.

And finally, JIT compilation can apply optimizations that normal compilers can't, such as inlining virtual methods, or inlining parts of methods using trace trees. Which, in theory, can make the code faster.
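As a concrete illustration (my own Java sketch, not from the answer), here is a virtual call site that a JIT may devirtualize and inline after observing that only one implementation ever shows up, guarding the assumption with a cheap type check; a static compiler that only sees the declared types generally cannot prove this is safe:

    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    final class Square implements Shape {   // exists, but is never instantiated in this run
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    public class DevirtualizeSketch {
        static double totalArea(Shape[] shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.area();  // virtual call; may be devirtualized after profiling
            }
            return total;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[100_000];
            for (int i = 0; i < shapes.length; i++) {
                shapes[i] = new Circle(1.0);  // in this run only Circles ever flow in
            }
            System.out.println(totalArea(shapes));
        }
    }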

+5
Jan 21 '10 at 1:59
  • Better reflection support. This could in principle be done in an ahead-of-time compiled program, but in practice it almost never happens.

  • Optimizations that can often only be figured out by observing the program dynamically. For example, inlining virtual functions, escape analysis to turn heap allocations into stack allocations, and lock coarsening (see the sketch below).
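A minimal Java sketch of my own (not from the answer) of the escape-analysis case: the temporary objects below never leave the method, so the JIT may replace them with plain stack values (scalar replacement) instead of heap allocations; a lock taken on an object that provably does not escape can likewise be elided:

    public class EscapeSketch {
        static final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        static double distance(double x1, double y1, double x2, double y2) {
            Point a = new Point(x1, y1);  // candidate for scalar replacement
            Point b = new Point(x2, y2);  // never escapes this method
            double dx = a.x - b.x;
            double dy = a.y - b.y;
            return Math.sqrt(dx * dx + dy * dy);
        }

        public static void main(String[] args) {
            double total = 0;
            for (int i = 0; i < 1_000_000; i++) {
                total += distance(0, 0, i, i);
            }
            System.out.println(total);
        }
    }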

+4
Jan 21 '10 at 2:00

One advantage of JIT compilation that I don't see listed here is the ability to inline/optimize across separate assemblies/dlls/jars (for simplicity, I'm just going to use "assemblies" from here on).

If your application references assemblies that might change after installation (e.g. pre-installed libraries, framework libraries, plugins), then a "compile-on-install" model must refrain from inlining methods across assembly boundaries. Otherwise, when a referenced assembly is updated, we would have to find all such inlined bits of code in referencing assemblies on the system and replace them with the updated code.

In a JIT model, we can freely inline across assemblies, because we only care about generating valid machine code for a single run, during which the underlying code isn't changing.
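Here is a hypothetical Java sketch (class and method names are made up for illustration) of the kind of cross-assembly call being discussed. A trivial getter that would normally live in a separately shipped jar is an ideal inlining candidate for a JIT, because if that jar is replaced the code is simply re-JITted on the next run; an install-time compiler that inlined it would have to track down and regenerate every caller whenever the library is updated:

    public class CrossJarInlineSketch {
        // Stand-in for a class that would normally live in another jar.
        static final class Config {
            static int getTimeoutSeconds() { return 30; }  // trivial getter
        }

        static int connectTimeoutMillis() {
            // After JIT inlining, this is effectively "return 30 * 1000".
            return Config.getTimeoutSeconds() * 1000;
        }

        public static void main(String[] args) {
            System.out.println(connectTimeoutMillis());  // 30000
        }
    }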

+2
Nov 05 '16 at 18:33

Perhaps it has to do with the modern approach to programming. You know, many years ago you would write your program on a sheet of paper, some other people would turn it into a stack of punched cards and feed it to the computer, and the next morning you would get a crash dump on a roll of paper weighing half a pound. All that forced you to think a lot before writing your first line of code.

Those days are long gone. When using a scripting language such as PHP or JavaScript, you can test any change immediately. That's not the case with Java, although application servers give you hot deployment. So it is very handy that Java programs can be compiled fast, since bytecode compilers are pretty straightforward.

But there is no such thing as a JIT-only language. Ahead-of-time compilers have been available for Java for quite some time, and more recently Mono introduced one for the CLR. In fact, MonoTouch is possible at all only because of AOT compilation, since non-native apps are banned from Apple's App Store.

+1
Jan 21 '10 at 5:57

I have been trying to figure this out as well, because I saw that Google is moving towards replacing its Dalvik virtual machine (essentially another Java virtual machine, like HotSpot) with the Android Runtime (ART), which is an AOT compiler, whereas Java usually uses HotSpot, which is a JIT compiler. Apparently ART is roughly 2x faster than Dalvik... so I thought to myself, "Why isn't Java using AOT as well?" Anyway, from what I can gather, the main difference is that a JIT uses adaptive optimization at run time, which (for example) allows ONLY those parts of the bytecode that are executed frequently to be compiled to native code, whereas AOT compiles all of the source code to native code, and a smaller amount of code runs faster than a larger amount.
I have to imagine that most Android apps consist of a small amount of code, so on average it makes more sense to compile all of the source code to native code ahead of time and avoid the overhead of interpretation/optimization.

+1
Mar 07 '14 at 16:54

The difference between platformBrowserDynamic and platformBrowser is the way your Angular application gets compiled. Using the dynamic platform makes Angular send the just-in-time compiler to the front end along with your application, which means your application is compiled on the client side. Using platformBrowser, on the other hand, causes an ahead-of-time precompiled version of your application to be sent to the browser, which usually means a significantly smaller package is delivered. The angular2 bootstrapping documentation at https://angular.io/docs/ts/latest/guide/ngmodule.html#!#bootstrap explains this in more detail.

0
Dec 12 '17 at 9:32


