In the "normal" merge sort, you divide the array in half until you reach a depth of log₂(n), and then start merging. Each merge of two arrays of size m takes about 2m operations.
This leads to the following formula for the running time:
n/2 * 2 + n/4 * 4 + ... + 1 * n = n * log₂(n)
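For reference, here is a minimal two-way merge sort sketch in Python (a sketch of the standard algorithm; the function names are my own):

```python
def merge(left, right):
    """Merge two sorted lists; two runs of total size 2m cost
    roughly 2m appends and at most 2m - 1 comparisons."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # at most one of these two
    out.extend(right[j:])  # extends is non-empty
    return out

def merge_sort(a):
    """Split in half down to depth log2(n), then merge back up."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```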
Now, if you do a three-way merge, you divide the array into three parts. The difference from the two-way version is twofold:
- The recursion depth is now log₃(n).
- During a merge, instead of comparing two elements, you need to find the minimum of three elements.
This means that in the most basic implementation, you get the following formula:
n/3 * 2 * 3 + n/9 * 2 * 9 + ... + 1 * 2 * n = 2 * n * log₃(n)
Note the factor of 2: finding the minimum of three elements takes two comparisons.
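A sketch of that basic three-way merge (illustrative only; it assumes numeric elements so that float('inf') can serve as a sentinel for an exhausted list):

```python
def merge3(a, b, c):
    """Naive three-way merge of three sorted lists."""
    out, i, j, k = [], 0, 0, 0
    while i < len(a) or j < len(b) or k < len(c):
        # Heads of each list; exhausted lists contribute +infinity.
        x = a[i] if i < len(a) else float('inf')
        y = b[j] if j < len(b) else float('inf')
        z = c[k] if k < len(c) else float('inf')
        # Exactly two comparisons pick the minimum of the three heads.
        if x <= y:
            if x <= z:
                out.append(x); i += 1
            else:
                out.append(z); k += 1
        else:
            if y <= z:
                out.append(y); j += 1
            else:
                out.append(z); k += 1
    return out
```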
Asymptotically, both are Θ(n log n). It is possible (I have not tried it) that in practice a three-way sort gives better performance thanks to its smaller depth log₃(n). However, log₂(n) for n = 1,000,000 is only 20, while log₃(n) for the same n is about 12.5; with the factor of 2, that is 2 * 12.5 = 25 operations per element versus 20, so I doubt this naive optimization will be really effective.
With a smart implementation, k-way merging can really benefit merge sort. The idea is that once you have found the minimum of the k heads, you already know the ordering among the other k-1 elements that were not minimal. So, after consuming that minimum from its list, you only need to take the next value from that list and find its place relative to the other k-1 elements. Using a heap, this is pretty trivial, as sketched below.
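Python's standard library already offers this as heapq.merge; here is a sketch of the mechanism (the name kway_merge is mine):

```python
import heapq

def kway_merge(lists):
    """Heap-based k-way merge: the heap keeps the k current heads
    ordered, so replacing a consumed head costs O(log k) instead of
    the k - 1 comparisons of a naive rescan."""
    out = []
    # Seed the heap with (value, list index, element index).
    heap = [(lst[0], li, 0) for li, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    while heap:
        value, li, ei = heapq.heappop(heap)
        out.append(value)
        if ei + 1 < len(lists[li]):
            # Only the new head of that one list re-enters the heap.
            heapq.heappush(heap, (lists[li][ei + 1], li, ei + 1))
    return out
```

For example, kway_merge([[1, 4], [2, 5], [3, 6]]) yields [1, 2, 3, 4, 5, 6].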
Be sure to check Jerry's answer. I agree with him that the true power of multiway merging shows up when working with multiple disks and parallel processing.