Why do we usually split into two parts in divide-and-conquer algorithms?

In merge sort, we divide the input into two parts when solving the problem. Why not divide it into 3 or more parts? Likewise, in many divide-and-conquer problems I have seen, the input is split into two parts. Why not 3 or more? What impact would this have on the solution / the complexity?

+6
4 answers

You can. But it would not matter much. The splitting is what gives you the log in the complexity (speaking very generally here).

To be precise, you get log_2, but this generalizes to just log, because constant factors do not matter in complexity analysis (see Big-O notation on Wikipedia). You can always convert log_a to log_b using only a constant factor:

log_a(x) = log_b(x) * (1 / log_b(a))

On a more concrete level: you get the constant factor 2 somewhere when splitting into two halves. If you split into 4 instead, you replace that 2 with a 4, but either way it is only a change in a constant factor.
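A quick numerical check of the change-of-base identity (my own sketch, not part of the original answer; Python, with an arbitrary n):

    import math

    n = 1_000_000                # arbitrary example problem size
    log2_n = math.log(n, 2)      # ~19.93 levels when splitting into halves
    log4_n = math.log(n, 4)      # ~9.97 levels when splitting into quarters

    # change of base: log_4(n) = log_2(n) * (1 / log_2(4)) = log_2(n) / 2
    assert abs(log4_n - log2_n / math.log(4, 2)) < 1e-9

    print(log2_n / log4_n)       # ~2.0: the difference is only a constant factor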


In fact, dividing into more than two parts is often done in practice, for example in parallel or distributed programming.
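For illustration only (a sequential sketch of my own, not something from this answer): a merge sort that splits into k parts instead of 2, using heapq.merge for the k-way merge. It is still O(n log n); only the base of the log changes.

    import heapq

    def merge_sort_k(items, k=3):
        """Merge sort that splits the input into k parts (assumes k >= 2)."""
        if len(items) <= 1:
            return list(items)
        step = max(1, (len(items) + k - 1) // k)             # size of each slice
        parts = [items[i:i + step] for i in range(0, len(items), step)]
        sorted_parts = (merge_sort_k(p, k) for p in parts)   # conquer each slice
        return list(heapq.merge(*sorted_parts))              # k-way merge

    print(merge_sort_k([5, 3, 8, 1, 9, 2, 7], k=3))   # [1, 2, 3, 5, 7, 8, 9]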


+1


Let's put some numbers on it.

Suppose n ~ 10^6. A plain linear pass over the data takes about 10^6 * t, where t is the time to process one element. A divide-and-conquer approach takes about log(10^6) * t. The log is the "magic" here, but log in which base? Big-O notation does not say, so compare:

    base    log(10^6) in that base
    2       20
    3       12.6
    4       10

Whether that is 20, 12.6 or 10 steps is a difference of a small constant factor; it is negligible next to the 10^6 steps of the O(n) approach. The win comes from replacing the linear term with a logarithm at all, not from the base of that logarithm.

Splitting into 3 or more parts, on the other hand, tends to complicate the split and merge logic without changing this picture, which is why two parts is the usual choice.
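The table values are easy to reproduce (a throwaway Python check, not part of the original answer):

    import math

    for base in (2, 3, 4):
        print(base, round(math.log(10**6, base), 1))
    # 2 19.9
    # 3 12.6
    # 4 10.0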

+1

The big-O complexity does not change; the base of the logarithm is only a constant factor.

Essentially the same question has been asked and answered before, for example here: https://softwareengineering.stackexchange.com/questions/197107/divide-and-conquer-algorithms-why-not-split-in-more-parts-than-two

In particular, one of the answers there notes:

"On the other hand, some tree-like data structures use a high branching factor (much larger than 3, often 32 or more), though usually for other reasons: it makes better use of the memory hierarchy. Data structures stored in RAM make better use of the cache, and data structures stored on disk require fewer HDD-to-RAM reads."
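A rough back-of-the-envelope illustration of that point (my own numbers, assuming one node read per tree level; real fanouts and caching behaviour vary):

    import math

    n = 10**6                                    # number of stored keys (arbitrary)
    for fanout in (2, 32, 256):
        levels = math.ceil(math.log(n, fanout))  # tree height ~ node reads per lookup
        print(fanout, levels)
    # prints 2 20, then 32 4, then 256 3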

0