Disclaimer: I am not a native English speaker nor an expert in this field; I am an amateur - expect inaccuracies and/or errors in what follows. So, in the spirit of Stack Overflow, don't be afraid to correct and improve my prose and/or my content. Also note that this is not a complete survey of automatic programming techniques (code generation (CG) from Model-Driven Architecture (MDA) deserves at least a passing mention).
I want to add more to what Varkhan answered (which is essentially correct).
The Genetic Programming (GP) approach to automatic programming conflates, with its fitness functions, two different problems ("self-compilation" is conceptually unproblematic):

- self-improvement/adaptation - of the synthesized program and, if desired, of the synthesizer itself; and
- program synthesis.
W.r.t. self-improvement/adaptation, there are Jürgen Schmidhuber's Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. (As a side note: his work on artificial curiosity is also quite interesting.) Autonomic systems are also relevant to this discussion.
W.r.t. program synthesis, I think it is possible to distinguish 3 main branches: stochastic (probabilistic - like the aforementioned GP), inductive, and deductive.
GP is essentially stochastic because it samples the space of likely programs with heuristics such as crossover, random mutation, gene duplication, gene deletion, etc. (then it tests the programs with the fitness function and lets the fittest survive and reproduce).
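To make the stochastic flavour concrete, here is a minimal, self-contained sketch of such a generate/mutate/test loop - my own toy illustration, not any particular GP system; the expression representation, function names, and parameters are all invented for the example. It evolves small arithmetic expression trees against a fitness function measuring error on a few input/output pairs.

```python
# Toy genetic programming sketch: evolve expression trees over x, constants,
# and +, -, * to fit a handful of input/output examples.
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['x', 1, 2, 3]

def random_tree(depth=3):
    """Grow a random expression tree: ('op', left, right) or a terminal."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, examples):
    """Lower is better: sum of squared errors on the input/output examples."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in examples)

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs for every node of the tree."""
    yield path, tree
    if isinstance(tree, tuple):
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def replace(tree, path, new):
    if not path:
        return new
    node = list(tree)
    node[path[0]] = replace(node[path[0]], path[1:], new)
    return tuple(node)

def crossover(a, b):
    """Replace a random subtree of `a` with a random subtree of `b`."""
    path, _ = random.choice(list(subtrees(a)))
    _, donor = random.choice(list(subtrees(b)))
    return replace(a, path, donor)

def mutate(tree):
    path, _ = random.choice(list(subtrees(tree)))
    return replace(tree, path, random_tree(depth=2))

def evolve(examples, pop_size=200, generations=40):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, examples))
        if fitness(population[0], examples) == 0:
            break
        survivors = population[:pop_size // 4]          # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda t: fitness(t, examples))

# Try to (re)discover something behaving like x*x + x + 1 on five examples.
examples = [(x, x * x + x + 1) for x in range(-2, 3)]
best = evolve(examples)
print(best, fitness(best, examples))
```

Note how the fitness function doubles as the (incomplete) specification: it is the single place where the search is scored, which is exactly the conflation mentioned above.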
Inductive program synthesis is usually known as Inductive Programming (IP), of which Inductive Logic Programming (ILP) is a sub-field. That is, in general the technique is not limited to the synthesis of logic programs or to synthesizers written in a logic programming language (nor is either limited to "..automatic demonstration or language/taxonomy learning").
IP is often deterministic (but there are exceptions): it starts from an incomplete specification (such as example input/output pairs) and uses it to constrain the search space of likely programs satisfying that specification, and then either tests candidates (the generate-and-test approach) or directly synthesizes a program by detecting recurrences in the given examples, which are then generalized (the data-driven or analytical approach). The process as a whole is essentially statistical induction/inference - i.e. choosing what to include in the incomplete specification is akin to random sampling.
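For comparison, here is an equally toy sketch of the generate-and-test flavour of IP - again my own invention, not an existing synthesizer; the tiny DSL, the size bound, and the example pairs are all made up for illustration. It enumerates expressions in order of size and returns the first one consistent with every input/output pair in the incomplete specification.

```python
# Toy generate-and-test inductive synthesis over a tiny arithmetic DSL.

def expressions(size):
    """All expression trees of exactly `size` nodes over x, small constants,
    and the binary operators + and *."""
    if size == 1:
        yield from ['x', 0, 1, 2]
        return
    for left_size in range(1, size - 1):
        for op in ('+', '*'):
            for left in expressions(left_size):
                for right in expressions(size - 1 - left_size):
                    yield (op, left, right)

def run(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = run(left, x), run(right, x)
    return a + b if op == '+' else a * b

def synthesize(examples, max_size=9):
    """Return the smallest DSL expression consistent with all examples."""
    for size in range(1, max_size + 1):
        for expr in expressions(size):
            if all(run(expr, x) == y for x, y in examples):
                return expr
    return None

# Incomplete specification: three input/output pairs of the unknown program.
# (They happen to be samples of y = x*x + x + 1; the synthesizer only has to
# return *some* expression consistent with the three pairs.)
spec = [(0, 1), (1, 3), (2, 7)]
print(synthesize(spec))
```

Because the enumeration is deterministic and ordered by program size, the result is the smallest DSL program consistent with the examples - one common way such synthesizers generalize from an incomplete specification.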
Both the generate-and-test and the data-driven/analytical approaches can be quite fast, so both are promising (even if only small synthesized programs have been demonstrated publicly so far), but generate-and-test (like GP) is embarrassingly parallel, so notable improvements (scaling to realistic program sizes) can be expected there (see the sketch after the footnote below). Note, however, that Incremental Inductive Programming (IIP)§, which is inherently sequential, has been demonstrated to be orders of magnitude more effective than non-incremental approaches.
§ These links point directly to PDF files: sorry, I could not find an abstract page.
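As for the "embarrassingly parallel" remark: the test phase of generate-and-test is a pure map over independent candidates, so it parallelizes trivially. A minimal sketch of that structure follows (the candidate representation - coefficient triples for a*x^2 + b*x + c - and all numbers are arbitrary choices for the example; only the parallel shape of the loop is the point):

```python
# Parallel scoring of candidates: the "test" phase is a data-parallel map.
import random
from concurrent.futures import ProcessPoolExecutor

EXAMPLES = [(x, x * x + x + 1) for x in range(-3, 4)]

def score(candidate):
    """Sum of squared errors of a*x^2 + b*x + c on the examples."""
    a, b, c = candidate
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in EXAMPLES), candidate

def main():
    random.seed(0)
    candidates = [(random.randint(-3, 3), random.randint(-3, 3), random.randint(-3, 3))
                  for _ in range(10_000)]
    with ProcessPoolExecutor() as pool:
        # Every candidate is scored independently of the others.
        best_error, best = min(pool.map(score, candidates, chunksize=500))
    print(best, best_error)

if __name__ == "__main__":   # guard needed where process pools use spawn
    main()
```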
Programming by Demonstration (PbD) and Programming by Example (PbE) are end-user development techniques known to leverage inductive program synthesis in practice.
Deductive program synthesis instead starts from a (presumed) complete (formal) specification (logic conditions). One of the techniques leverages automated theorem provers: to synthesize a program, it constructs a proof of the existence of an object meeting the specification; then, via the Curry-Howard-de Bruijn isomorphism (the proofs-as-programs and formulae-as-types correspondences), it extracts a program from the proof. Other variants include the use of constraint solving and the deductive composition of subroutine libraries.
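To make the proofs-as-programs idea a bit more concrete, here is a tiny self-contained Lean 4 sketch (my own illustration, not tied to any particular synthesis system; the theorem and definition names are invented). The term that proves the proposition is, read as a program, exactly the function one would have written by hand:

```lean
-- Under the Curry-Howard correspondence, this proof of A → (A → B) → B
-- *is* a program: take a proof of A and a proof of A → B, apply the latter.
theorem modus_ponens (A B : Prop) (a : A) (h : A → B) : B :=
  h a

-- The same term at the level of data types is the "extracted" program:
-- a function that takes a value and a callback and applies the callback.
def applyFn (α β : Type) (a : α) (f : α → β) : β :=
  f a

#check modus_ponens   -- modus_ponens : ∀ (A B : Prop), A → (A → B) → B
#check applyFn        -- applyFn : (α β : Type) → α → (α → β) → β
```

In a deductive synthesizer, the point is that the proof term would be found by an automated prover from the specification; here it is written by hand only to show the correspondence.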
In my opinion, inductive and deductive synthesis in practice attack the same problem from two somewhat different angles, because what constitutes a complete specification is debatable (besides, a specification that is complete today can become incomplete tomorrow - the world is not static).
When (if) these techniques (self-improvement/adaptation and program synthesis) mature, they promise to raise the amount of automation provided by declarative programming (whether such a setting should still be considered "programming" is sometimes debated): we will concentrate more on Domain Engineering and Requirements Analysis and Engineering than on manual software design and development, manual debugging, manual system performance tuning, and so on (possibly with less accidental complexity compared to that introduced by current manual, non-self-improving/adapting techniques). This will also promote a level of agility yet to be demonstrated by current techniques.