In short.
The effectiveness of a sorting algorithm depends on the input data and the task:
- For comparison-based sorting, the best speed that can be achieved is n * log(n).
- If the data already contains sorted runs, sorting can be faster than n * log(n).
- If the data consists largely of duplicates, sorting can be done in almost linear time (see the sketch after this list).
- Most sorting algorithms have their own niche of application.
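To illustrate the duplicates point, here is a minimal sketch (names are illustrative) of a counting sort: with k distinct values it runs in O(n + k), which is effectively near-linear when k is much smaller than n.

```python
from collections import Counter

def counting_sort(items):
    """Sort integers with many duplicates in roughly O(n + k) time."""
    counts = Counter(items)            # one O(n) pass to tally each value
    result = []
    for value in sorted(counts):       # k distinct keys, k << n
        result.extend([value] * counts[value])
    return result

print(counting_sort([3, 1, 3, 2, 1, 1, 3, 2]))  # [1, 1, 1, 2, 2, 3, 3, 3]
```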
Most quicksort variants also have an average case of n * log(n), but in practice they are usually faster than other, even highly optimized, algorithms. Quicksort is fastest when it is allowed to be unstable; stable variants are only slightly slower. Its main problem is the quadratic worst case. The best-known fix is Introsort.
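A minimal sketch of the Introsort idea (not an in-place, production implementation): run quicksort, but once the recursion depth exceeds roughly 2 * log2(n), fall back to heapsort so the worst case stays n * log(n). Function names and the depth limit here are illustrative assumptions.

```python
import heapq, math, random

def introsort(a, depth=None):
    if depth is None:
        depth = 2 * max(1, int(math.log2(len(a) or 1)))
    if len(a) <= 1:
        return a
    if depth == 0:
        # Depth limit hit: heapsort guarantees O(n log n) even on bad pivots.
        heap = list(a)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return introsort(less, depth - 1) + equal + introsort(greater, depth - 1)

print(introsort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```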
Most merge sort variants have their best, average, and worst case all tied to n * log(n). Merge sort is stable and relatively easy to scale. BUT it requires auxiliary storage (or an emulation of it) proportional to the total number of elements. The main problem is memory. The best-known fix is Timsort.
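A minimal top-down merge sort sketch showing where the memory cost comes from: the merge step builds an auxiliary list proportional to the number of elements being merged. (Python's own built-in sorted()/list.sort() is Timsort, a merge/insertion-sort hybrid that also exploits already-sorted runs.)

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge step: stable (ties keep left-side order) but needs O(n) extra space.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```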
The choice of sorting algorithm also changes with the size of the input. I can confidently state that, for sorting 10 TB of input data, merge sort variants have no rivals.
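A hypothetical external merge sort sketch for input far larger than RAM (the 10 TB case above): sort fixed-size chunks in memory, spill each to a temporary file, then do a streaming k-way merge with heapq.merge. The chunk size and helper names are illustrative assumptions, and the input is assumed to be newline-terminated text lines.

```python
import heapq, os, tempfile

def _spill(sorted_lines):
    """Write one sorted run to a temp file and return its path."""
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        f.writelines(sorted_lines)
    return path

def external_sort(lines_iter, chunk_size=100_000):
    """Yield all lines in sorted order without holding them all in memory."""
    chunk_files, chunk = [], []
    for line in lines_iter:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            chunk_files.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        chunk_files.append(_spill(sorted(chunk)))
    # k-way merge of the sorted runs, reading each file as a lazy stream.
    streams = [open(path) for path in chunk_files]
    try:
        yield from heapq.merge(*streams)
    finally:
        for f in streams:
            f.close()
        for path in chunk_files:
            os.remove(path)

# Usage sketch: for line in external_sort(open("big_input.txt")): ...
```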