I have seen many questions asking whether quicksort or mergesort is better, and when to use each of them, but what I would like to see is some information in terms of the size of the data being sorted. Let's say I have some number of elements, whether ints or custom objects, and I am sorting them.
I see mergesort, in a sense, as the best case of quicksort (choosing the exact median as the pivot at every step), but with some overhead. Therefore, at a certain size, when that overhead becomes insignificant compared to mergesort's consistently optimal behavior, it should become preferable to quicksort.
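To make the "overhead" part of that intuition concrete, here is a minimal top-down mergesort sketch (my own illustration, not from any particular library). The merge step performs a perfectly balanced split, like quicksort partitioning around an ideal median, but it allocates temporary lists and copies every element at every level of recursion:

```python
def merge_sort(a):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # slicing copies -- part of the overhead
    right = merge_sort(a[mid:])
    # Merge step: a guaranteed 50/50 split (quicksort with a perfect
    # median pivot), but with extra memory allocation and copying.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```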
Radix sort has a "linear" runtime, provided the number of digits in the keys being sorted does not approach the number of elements being sorted. However, as far as I know, radix sort also carries a relatively large constant factor in its runtime.
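For reference, here is an LSD radix sort sketch for non-negative ints (a generic textbook version, my own code). The runtime is O(d * (n + base)) where d is the number of digits; it is linear in n only while d stays small, and each digit pass sweeps the entire input, which is where the large constant factor comes from:

```python
def radix_sort(a, base=256):
    # LSD radix sort: one bucket pass per base-256 "digit",
    # least significant digit first. Assumes non-negative ints.
    if not a:
        return a
    max_val = max(a)
    shift = 0
    while (max_val >> shift) > 0:
        # Each pass touches all n elements -- the constant factor.
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x >> shift) & (base - 1)].append(x)
        a = [x for b in buckets for x in b]
        shift += 8  # base == 256 means 8 bits per digit
    return a
```

With the max-key bound discussed below (largest key about twice the element count), d stays around 3 base-256 digits even for millions of elements, which is exactly the regime where radix sort's linearity should pay off.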
If I recall correctly from some tests in the past, mergesort starts to make sense when the number of elements being sorted gets into the millions, and radix sort at the tens of millions.
How accurate are these estimates? Can someone confirm, refute, or correct them to some extent?
(I'm talking about "simple" implementations of each sort. Also, in the case of radix sort, assume the largest single key does not exceed twice the number of elements being sorted; i.e., when sorting 4,000,000 elements, the largest possible key is 8,000,000.)
Edit: I would welcome any input on the number ranges where each is fastest. I gave some in the question, and that may have been a mistake; what I would like to see in an answer is an opinion on those ranges. I know quicksort is the default because it is usually "good enough", does not have mergesort's space complexity, and is not vulnerable to maliciously crafted data with indecently large keys (as radix sort is).
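In case it helps anyone answering, this is the kind of rough, self-contained benchmark sketch I have in mind for probing those ranges. The sizes and the max-key rule (largest key under twice n, per my assumption above) are knobs to tweak, not authoritative thresholds; the built-in `sorted` (Timsort, a mergesort hybrid) stands in for a merge-based sort:

```python
import random
import time

def quicksort(a):
    # Simple median-of-three quicksort (not in-place, for brevity).
    if len(a) <= 1:
        return a
    pivot = sorted([a[0], a[len(a) // 2], a[-1]])[1]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

def bench(sort_fn, n):
    # Random keys bounded by 2n, matching the assumption in the question.
    data = [random.randrange(2 * n) for _ in range(n)]
    start = time.perf_counter()
    out = sort_fn(data)
    elapsed = time.perf_counter() - start
    assert out == sorted(data)  # sanity check the result
    return elapsed

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000):
        print(n,
              f"quicksort: {bench(quicksort, n):.3f}s",
              f"builtin merge-based: {bench(sorted, n):.3f}s")
```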