I am still not quite happy with the accepted answer, so here is mine, taking the questions in turn:
Is it theoretically possible to sort an array of n integers in amortized O(n) complexity?
The answer depends on the machine that runs the sorting algorithm. If you have a random access machine that can operate on only 1 bit at a time, you can do a radix sort on integers with at most k bits, as has already been suggested, and you end up with O(kn) complexity.
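For concreteness, here is a minimal Python sketch of such a bit-at-a-time radix sort (the function name and test values are my own); it makes k stable passes over the n integers, for O(kn) work in total:

```python
def radix_sort_bits(a, k):
    """LSD radix sort for non-negative integers of at most k bits.

    Performs k stable passes, one per bit, so the total work is O(k * n).
    """
    for bit in range(k):
        # Stable partition on the current bit: zeros first, then ones.
        zeros = [x for x in a if not (x >> bit) & 1]
        ones = [x for x in a if (x >> bit) & 1]
        a = zeros + ones
    return a


print(radix_sort_bits([5, 3, 7, 0, 2, 6], k=3))  # [0, 2, 3, 5, 6, 7]
```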
But if you are working on a fixed word-size machine with words of at least k bits (which all consumer computers are), the best you can achieve is O(n log n). This is because either log n < k, or you could do a counting sort first and then sort with an O(n log n) algorithm, which would yield the first case again.
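Here is a minimal sketch of the counting sort step mentioned above (assuming non-negative k-bit integers; the names are my own). It runs in O(n + 2^k), which is linear in n once k is at most about log n:

```python
def counting_sort(a, k):
    """Counting sort for non-negative integers of at most k bits.

    Runs in O(n + 2**k) time and space, which is linear in n
    whenever 2**k is O(n), i.e. when k is at most about log2(n).
    """
    counts = [0] * (1 << k)
    for x in a:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out


print(counting_sort([5, 3, 7, 0, 2, 6, 3], k=3))  # [0, 2, 3, 3, 5, 6, 7]
```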
What about achieving O(n) worst-case complexity?
That is impossible; the link has already been provided. The idea of the proof is that in order to sort, you have to decide, for each pair of elements to be sorted, whether one is greater or less than the other. Using transitivity, this can be represented as a decision tree, which must have at least n! leaves and therefore depth Ω(log n!) = Ω(n log n). So if you want better performance than Ω(n log n), you would have to remove edges from this decision tree. But if the decision tree is not complete, how can you be sure that you have made the right decision about some pair of elements a and b?
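For reference, the standard counting argument behind that bound: a decision tree that distinguishes all n! input orderings needs at least n! leaves, so its depth is at least log2(n!), which is Ω(n log n):

```latex
% Depth lower bound for a comparison-sort decision tree with n! leaves:
\log_2(n!) \;=\; \sum_{i=1}^{n} \log_2 i
\;\ge\; \frac{n}{2}\,\log_2\!\frac{n}{2}
\;=\; \Omega(n \log n)
```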
Can you create such an algorithm without extra memory usage?
As shown above, this is not possible, so the remaining questions are moot.