You might want to take a look at the work of W. Daniel Hillis from the 80s. He spent a lot of time evolving sorting networks with genetic programming. While he was more interested in sorting a constant number of objects (16-element sorting networks had been a major academic problem for nearly a decade), his work is worth knowing if you are really interested in genetic sorting algorithms.
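For context, a sorting network is just a fixed sequence of compare-and-swap operations on fixed positions, which is what makes it such a natural target for evolution. Here is a minimal sketch (not one of Hillis's evolved networks, but the well-known optimal 5-comparator network for 4 elements) showing how such a network is represented and applied:

    # The optimal 5-comparator sorting network for 4 elements,
    # written as a fixed list of compare-and-swap index pairs.
    NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

    def apply_network(network, values):
        """Apply a sorting network (a list of index pairs) to a list."""
        values = list(values)
        for i, j in network:
            if values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
        return values

    print(apply_network(NETWORK_4, [3, 1, 4, 2]))  # -> [1, 2, 3, 4]

A GP chromosome for this problem is essentially just the list of pairs, and fitness is how often (and with how few comparators) the network sorts its inputs correctly.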
When evolving an algorithm that sorts lists of arbitrary length, you may also want to become familiar with the concept of co-evolution. I once built a co-evolutionary system where the point was to have one genetic algorithm evolving sorting algorithms while another GA evolved unsorted lists of numbers. The fitness of a sorter was its accuracy (plus a bonus for fewer comparisons if it was 100% accurate), and the fitness of a list generator was how many sorting algorithms made mistakes when sorting its lists.
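To make that fitness pairing concrete, here is a rough sketch of how the two scoring functions could be wired up. The `sorter.sort` method and `sorter.comparisons_used` counter are hypothetical names standing in for whatever the evolved individuals actually expose; the exact bonus shape is likewise just an assumption:

    def count_errors(sorter, numbers):
        """Positions where the sorter's output disagrees with a correct sort."""
        out = sorter.sort(list(numbers))
        return sum(a != b for a, b in zip(out, sorted(numbers)))

    def sorter_fitness(sorter, test_lists):
        """Accuracy over the co-evolved lists, plus a comparison bonus
        that only kicks in once the sorter is 100% accurate."""
        total_positions = sum(len(lst) for lst in test_lists)
        total_errors = sum(count_errors(sorter, lst) for lst in test_lists)
        fitness = 1.0 - total_errors / total_positions
        if total_errors == 0:
            # Hypothetical counter: assumed to be incremented on every
            # comparison the sorter performs. Fewer comparisons, bigger bonus.
            fitness += 1.0 / (1.0 + sorter.comparisons_used)
        return fitness

    def list_fitness(numbers, sorters):
        """A list scores higher the more sorters it manages to trip up."""
        return sum(count_errors(s, numbers) > 0 for s in sorters)

The nice property of this arms race is that the list population keeps drifting toward whatever inputs the current sorters handle badly, so the sorters cannot overfit a fixed test set.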
To answer your specific question of whether bubble sort has ever evolved this way: I would seriously doubt it, unless the programmer's fitness function was both very specific and ill-advised. Yes, bubble sort is very simple, so a GP whose fitness function is accuracy plus program size might eventually arrive at it. But why would a programmer choose program size rather than the number of comparisons as the fitness criterion, when it is the latter that determines running time?
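Roughly, the two fitness functions being contrasted might look like the sketch below; the penalty weights and the linear penalty shape are arbitrary illustrative choices, not anything from an actual experiment:

    def fitness_size_penalty(accuracy, program_size, alpha=0.01):
        """Rewards small programs: the variant under which bubble sort could win."""
        return accuracy - alpha * program_size

    def fitness_comparison_penalty(accuracy, comparisons, beta=0.001):
        """Rewards fewer comparisons, i.e. faster sorting on real inputs."""
        return accuracy - beta * comparisons

Under the second variant, a longer program implementing an O(n log n) method beats bubble sort on any reasonably sized test lists, so the evolutionary pressure points away from bubble sort even though it is the "simplest" correct program.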
In asking whether GP can evolve one algorithm into another, I wonder if you fully understand what GP is. Ideally, each unique chromosome defines a unique sorting algorithm. A population of 200 chromosomes represents 200 different algorithms. Yes, quicksort and bubble sort may be in there somewhere, but there are also 198 other, potentially unnamed, methods in there as well.
Sniggerfardimungus