Copying a link (reference) is still at least as expensive as copying a primitive.
Most (if not all) Java implementations represent an array of objects as an array of references (pointers) to the objects. Thus cloning an array of references consumes less space than cloning the objects themselves whenever the objects are larger than a reference.
I do not know why the term "cloning" was used. Merge sort allocates a second temporary array, but that array is not a "clone" of the original. Instead, a proper merge sort alternates the direction of the merge, from the original array to the temp array and from the temp array back to the original, per pass for bottom-up, or per recursion level for top-down.
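A minimal bottom-up sketch of that alternating scheme (my own illustration, not the JDK source): the temp array is never a clone, it is simply the merge destination on every other pass.

```java
public class BottomUpMergeSort {
    public static void sort(int[] a) {
        int n = a.length;
        int[] b = new int[n];              // second array, allocated once
        int[] src = a, dst = b;
        for (int width = 1; width < n; width *= 2) {
            for (int lo = 0; lo < n; lo += 2 * width) {
                int mid = Math.min(lo + width, n);
                int hi  = Math.min(lo + 2 * width, n);
                merge(src, dst, lo, mid, hi);
            }
            int[] t = src; src = dst; dst = t;   // swap merge direction each pass
        }
        if (src != a)                      // odd number of passes: copy back once
            System.arraycopy(src, 0, a, 0, n);
    }

    // Merge sorted runs src[lo..mid) and src[mid..hi) into dst[lo..hi).
    private static void merge(int[] src, int[] dst, int lo, int mid, int hi) {
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++) {
            if (i < mid && (j >= hi || src[i] <= src[j])) dst[k] = src[i++];
            else                                          dst[k] = src[j++];
        }
    }
}
```

Note that the only full copy happens at the end, and only if the number of passes is odd; most of the time data just ping-pongs between the two arrays.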
dual-pivot quicksort
Based on what I can find in a web search, Java's dual-pivot quicksort tracks recursion depth and switches to heap sort if the depth becomes excessive, preserving O(n log n) worst-case time complexity, but with a higher constant factor.
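To show the idea (an introsort-style guard of my own, not the actual JDK code, and with a plain single-pivot partition for brevity): quicksort runs normally, but once the depth budget of roughly 2*log2(n) is spent, the remaining range is handed to heap sort.

```java
public class GuardedQuickSort {
    public static void sort(int[] a) {
        // depth limit ~ 2 * log2(n); exceeding it means pathological pivots
        int limit = 2 * (32 - Integer.numberOfLeadingZeros(Math.max(a.length, 1)));
        quick(a, 0, a.length - 1, limit);
    }

    private static void quick(int[] a, int lo, int hi, int depth) {
        while (lo < hi) {
            if (depth-- == 0) {           // depth exhausted: fall back to heap sort
                heapSort(a, lo, hi);
                return;
            }
            int p = partition(a, lo, hi);
            quick(a, lo, p - 1, depth);   // recurse on left part
            lo = p + 1;                   // iterate on right part
        }
    }

    // Lomuto partition with the last element as pivot.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    // In-place heap sort of the subrange a[lo..hi].
    private static void heapSort(int[] a, int lo, int hi) {
        int n = hi - lo + 1;
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, lo, i, n);
        for (int end = n - 1; end > 0; end--) {
            int t = a[lo]; a[lo] = a[lo + end]; a[lo + end] = t;
            siftDown(a, lo, 0, end);
        }
    }

    private static void siftDown(int[] a, int lo, int i, int n) {
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;
            if (c + 1 < n && a[lo + c + 1] > a[lo + c]) c++;
            if (a[lo + i] >= a[lo + c]) return;
            int t = a[lo + i]; a[lo + i] = a[lo + c]; a[lo + c] = t;
            i = c;
        }
    }
}
```

The "higher constant factor" shows up here: heap sort's poor cache behavior makes it slower than quicksort in practice, so it is only used as a safety net.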
quick sort versus merge sort
In addition to being stable, merge sort can be faster for sorting an array of pointers (references) to objects. Merge sort does more moves (of references) but fewer comparisons (each of which dereferences references to access objects) than quicksort.
On a system with 16 registers (most of them usable as pointers), for example x86 in 64-bit mode, a 4-way merge sort is about as fast as ordinary quicksort, but I don't remember seeing a 4-way merge sort in a standard library, at least not for PCs.
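For illustration, here is the core step such a sort repeats: merging four adjacent sorted runs in one pass, so half as many passes are needed as with a 2-way merge. This is my own naive sketch; a tuned 4-way merge would keep the four run indices in registers and replace the inner scan with nested comparisons.

```java
public class FourWayMerge {
    // Merge four adjacent sorted runs src[lo..b1), [b1..b2), [b2..b3), [b3..hi)
    // into dst[lo..hi) by repeatedly taking the smallest remaining head element.
    static void merge4(int[] src, int[] dst, int lo, int b1, int b2, int b3, int hi) {
        int[] idx = {lo, b1, b2, b3};   // current position in each run
        int[] end = {b1, b2, b3, hi};   // end of each run
        for (int k = lo; k < hi; k++) {
            int best = -1;
            for (int r = 0; r < 4; r++)
                if (idx[r] < end[r] && (best < 0 || src[idx[r]] < src[idx[best]]))
                    best = r;
            dst[k] = src[idx[best]++];
        }
    }
}
```

A full 4-way merge sort would apply this pass bottom-up, quadrupling the run width each time, and alternate direction between the two arrays just like the 2-way version.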
rcgldr