This is simply because it has been proven to be theoretically impossible, at least in the general case.
Suppose you have infinite computational power, so that the size of the search space and the slowness of the search are no longer an issue. You are then still faced with two problems:

- Will the generated program halt?
- How can I be sure that the generated program behaves the way I want?
The first problem is the central question of computability theory and is called the halting problem. Turing proved in 1936 that it is undecidable in general: you can answer it for some particular program–input pairs, but there is no automated procedure that decides, for every program and every input, whether the program halts. So there is not much you can do about this one ;)
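A minimal sketch of Turing's diagonalization argument, in Python. The function `halts` below is a hypothetical oracle introduced only for illustration; the whole point is that no such total decision procedure can exist, because assuming it leads to a contradiction.

```python
def halts(program, argument):
    """Hypothetical oracle: would return True iff program(argument) terminates.
    No correct, always-terminating implementation of this can exist."""
    raise NotImplementedError("no such total decision procedure exists")


def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        # The oracle says program(program) halts, so loop forever.
        while True:
            pass
    # The oracle says program(program) loops forever, so halt immediately.
    return "halted"


# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) returned True, paradox(paradox) would loop forever.
# If it returned False, paradox(paradox) would halt.
# Either answer contradicts the oracle, so `halts` cannot be implemented.
```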
The second problem is related to the correctness of the program. In genetic programming, validation is usually done against a finite set of test cases, which provides no proof of correctness. This is comparable to unit testing: it gives confidence for a number of examples, but it is not a general proof. For example, if I write a program for checking prime numbers and test it only with 3, 5, 7 and 11, then a program that simply returns true for every odd number will pass the tests.
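A toy illustration of that prime-checking example (the name `looks_prime` is made up for this sketch): the candidate passes the four chosen test cases yet is clearly wrong.

```python
def looks_prime(n):
    # An "evolved" candidate that merely returns True for odd numbers.
    return n % 2 == 1


# The four hand-picked test cases from the text all pass...
assert all(looks_prime(p) for p in (3, 5, 7, 11))

# ...yet the program is wrong: 9 = 3 * 3 is not prime.
print(looks_prime(9))  # True, incorrectly
```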
The next step is to use automatic proofs. Automatically proving the correctness of algorithms is in fact deeply connected with automated theorem proving in mathematics. You describe your program in an axiomatized system and try to prove your correctness statement automatically. Here again you run into strong theoretical barriers, namely Gödel's incompleteness theorems. These theorems state, among other things, that even for very simple axiomatized systems capable of doing arithmetic on the natural numbers, there is no algorithm (an "effective procedure") capable of proving all true theorems about those natural numbers. Concretely, this means that even for simple programs there are correctness statements you will not be able to prove.
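To give a feel for what automated proving looks like in practice, here is a small sketch using the Z3 SMT solver's Python bindings (an assumption of this example, not something prescribed by the answer). For a trivial arithmetic claim the solver succeeds; for richer properties it may return `unknown`, which is the practical face of the theoretical limits described above.

```python
from z3 import Int, ForAll, Not, Solver, unsat

x = Int('x')
# Claim: for every integer x, x + 1 > x.
claim = ForAll([x], x + 1 > x)

# Standard refutation approach: assert the negation and check satisfiability.
s = Solver()
s.add(Not(claim))

if s.check() == unsat:
    print("proved: the negation is unsatisfiable")
else:
    print("not proved (result was sat or unknown)")
```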
Even leaving proven correctness aside, evaluating a genetic program on test cases is very prone to over-specialization, a phenomenon known as overfitting in machine learning. That is, the learned program will handle the provided test examples perfectly, but it may behave completely wrongly on other inputs.
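A common mitigation, sketched below under my own naming assumptions (`accuracy`, `looks_prime`), is to keep a held-out set of examples that the search never sees: the over-specialized candidate from the prime example scores perfectly on the training cases but is exposed by the held-out set.

```python
def is_prime(n):
    """Reference implementation used to label examples."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))


def accuracy(candidate, cases):
    """Fraction of (input, expected) pairs the candidate gets right."""
    return sum(candidate(n) == expected for n, expected in cases) / len(cases)


def looks_prime(n):
    # The over-specialized candidate: true for every odd number.
    return n % 2 == 1


train = [(n, True) for n in (3, 5, 7, 11)]          # cases used during evolution
held_out = [(n, is_prime(n)) for n in range(2, 100)]  # never shown to the search

print(accuracy(looks_prime, train))     # 1.0 -- looks perfect on the chosen cases
print(accuracy(looks_prime, held_out))  # noticeably lower -- odd composites expose it
```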
Julien Feb 17 '13 at 11:50