The main question: why aren't general-purpose, or even specialized, whole-program optimizers part of our daily lives?
I started thinking about this after reading the SuperCompilers, LLC white paper, which discusses their method of "supercompiling" or metacompiling a program's source to (usually) produce a faster version that does the same thing as the original program. Essentially, they step through the execution of the program and recompile it into the same target language, and natural optimizations fall out of this process; for example, a generic binary search function can be specialized to binary-search an array of 100 elements if the program often uses arrays of 100 elements.
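To make that concrete, here is a minimal Python sketch of the idea (my own illustration, not the white paper's actual method): a generic binary search next to a hypothetical specializer that fixes the array length at 100. A real supercompiler would derive something like this automatically and could go further, unrolling the loop into a fixed tree of about seven comparisons.

```python
def generic_binary_search(arr, key):
    """Generic version: the bounds depend on len(arr) at every call."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def specialize_binary_search(n):
    """Hypothetical specializer: fix the length n up front.  Here we only
    close over the constant bound; a real supercompiler could also unroll
    the ~log2(n) iterations into straight-line comparisons."""
    def search(arr, key):
        lo, hi = 0, n - 1          # n is now a 'compile-time' constant (100)
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == key:
                return mid
            elif arr[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1
    return search

search_100 = specialize_binary_search(100)
data = list(range(0, 200, 2))          # any sorted array of 100 elements
assert search_100(data, data[37]) == 37
```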
Partial evaluation is perhaps a narrower form of whole-program optimization, where the program's source is reduced/evaluated with respect to some fixed set of inputs, leaving the unknown inputs open to be evaluated at run time. For example, a generic function x ^ y, given that y = 5, can be reduced to x ^ 5, or perhaps to something like (x * x) * (x * x) * x.
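Again, only as a rough illustration of my own (a toy, not a real partial evaluator): run the generic power routine on the exponent that is known (y = 5), record the multiplications it would perform, and emit that as the residual program.

```python
def power(x, y):
    """Generic version: the exponent y is only known at run time."""
    result = 1
    while y > 0:
        if y & 1:
            result *= x
        x *= x
        y >>= 1
    return result

def specialize_power(y):
    """Toy partial evaluator: run the loop above on the *known* exponent y
    and record the multiplications as Python source (the residual program)."""
    lines = ["def power_spec(x):", "    acc = 1"]
    term = "x"
    while y > 0:
        if y & 1:
            lines.append(f"    acc = acc * {term}")
        term = f"({term} * {term})"
        y >>= 1
    lines.append("    return acc")
    src = "\n".join(lines)
    namespace = {}
    exec(src, namespace)   # compile the residual program into a callable
    return namespace["power_spec"], src

power5, residual = specialize_power(5)   # residual is roughly x * ((x*x)*(x*x))
assert power5(3) == power(3, 5) == 243
```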
(I apologize for my crude descriptions of these two techniques.)
Historically, whole-program optimizations like the two above would have been too memory-intensive to perform, but with our machines having gigabytes of memory (or using something like the cloud), why haven't we seen lots of open-source partial evaluators and the like spring up? I have seen some, but I would have thought this would be a regular part of our tool chain.
- Is it fear (programmers worrying that the transformation of their code will introduce bugs)?
- Is it just not worth it (i.e. for web apps the bottleneck is I/O, and this kind of optimization only saves CPU time)?
- Is this kind of software just that difficult to write?
- Or is my perception of this simply wrong?