Big O for a recursive function

I know that when you shrink the size of the problem by a constant fraction at each step, you are dealing with O(log(n)). However, I am confused about what happens when more than one recursive call does this. For example, in this function we compute a power (x raised to the n-th):

 public static long pow(long x, int n) {
     if (n == 1)
         return x;
     if (isEven(n))
         return pow(x, n / 2) * pow(x, n / 2);
     else
         return pow(x * x, n / 2) * x;
 }
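(The isEven helper is not shown; presumably it just checks the low bit, something like this assumed minimal version:)

 private static boolean isEven(int n) {
     return n % 2 == 0; // assumed definition, not part of the original snippet
 }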

After doing the analysis, I got a running time of O(N). Am I right? Thank you for your time.

+5
3 answers

Yes, you are right, at least in the worst case.

Note that for n = 2^k (for some natural k), until you reach the stop clause, the condition isEven(n) is always true, so two recursive calls are made at every level.

With that established, it is enough to analyze:

 T(n) = T(n/2) + T(n/2) + X 

where X is a constant (each call does a constant amount of work, ignoring the nested recursive calls).

From the master theorem, case 1, with:

 f(n) = X, a = 2, b = 2, c = 0 (since X is in O(n^0) = O(1))

And since c = 0 < 1 = log_2(2), the conditions for case 1 apply, and we can conclude that T(n) is in Theta(n^(log_2 2)) = Theta(n).
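If you want a quick sanity check, here is a small counting sketch (a helper added for illustration, not part of the original code) that mirrors pow's call structure and returns how many calls pow would make:

 // Mirrors pow(x, n) and counts the recursive calls instead of multiplying.
 static long countCalls(int n) {
     if (n == 1) return 1;
     if (n % 2 == 0)
         return 1 + countCalls(n / 2) + countCalls(n / 2); // even branch: two recursive calls
     else
         return 1 + countCalls(n / 2);                     // odd branch: one recursive call
 }

For n = 2^k this returns 2n - 1, which grows linearly in n, matching the Theta(n) bound.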

Average case analysis:

For the average case, with n uniformly distributed, on average half of the bits in the binary representation of the number will be set (1) and half will be clear (0).

Since dividing by 2 is basically an arithmetic shift to the right, and the condition isEven(n) is true if and only if the least significant bit is 0, the average-case recurrence is:

 T(n) = 0.5 T(n/2) + 0.5 * 2 T(n/2) + X = 0.5 * 3 * T(n/2) + X = 1.5 * T(n/2) + X 

So,

 T(n) = 3/2 T(n/2) + X 

Case 1 still applies (given that X is a constant):

a = 3/2, b = 2, c = 0

and you get an average-case complexity of Theta(n^(log_2 1.5)) ~= Theta(n^0.58).
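For reference, the exponent is just the base-2 logarithm of the branching factor 3/2:

 log_2(1.5) = log_2(3) - log_2(2) ~= 1.585 - 1 = 0.585

which is where the n^0.58 comes from.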

Quick note:

This assumes that all arithmetic is O(1). If that is not the case (very large numbers), you should substitute the actual cost of those operations for the constant in the definition of T(n) and redo the analysis. Assuming each such operation is sublinear in n (in the number itself, not in the number of bits representing it), the result remains Theta(n), since case 1 of the master theorem still applies. (For the average case, the same holds for any cost function "better" than the ~O(n^0.58) result shown.)
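To make that concrete, if each multiplication costs some f(n) instead of a constant, the worst-case recurrence becomes

 T(n) = 2T(n/2) + f(n)

and as long as f(n) is in O(n^c) for some c < 1, case 1 of the master theorem still gives Theta(n^(log_2 2)) = Theta(n).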

+4

Varun, you are partly right. Let's look at two cases:

  • If n is even, you simply divide the task into two halves without any significant optimization, since pow(x, n/2) is calculated twice.

  • If n is odd, then you have a special decomposition case: x is replaced by x*x, making x*x the new base, which saves you from recalculating x * x.

In the first case there is only a small optimization, since it is easier to work with smaller sub-products than to do the whole multiplication in one long chain, like this:

(3 * 3 * 3 * 3) * (3 * 3 * 3 * 3) is easier to evaluate than (3 * 3 * 3 * 3 * 3 * 3 * 3 * 3), so the first case slightly improves the computation by using the fact that multiplication is an associative operation. The number of multiplications performed in the first case is not reduced, but the way the computation is organized is better.

In the second case, however, you have a significant optimization. Suppose n = (2^p) - 1. In this case the problem reduces to one where n = ((2^p - 1) - 1) / 2 = ((2^p) - 2) / 2 = (2^(p-1)) - 1. So if p is a positive integer and n = (2^p) - 1, each step halves the problem. Thus the algorithm is logarithmic in the best case, n = (2^p) - 1, and linear in the worst case, n = 2^p.
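To see both cases concretely (with a generic base x):

 pow(x, 15) -> pow(x^2, 7) * x -> pow(x^4, 3) * x^2 -> pow(x^8, 1) * x^4    (n = 2^4 - 1: one call per level, logarithmic depth)
 pow(x, 16) -> pow(x, 8) * pow(x, 8) -> ...                                 (n = 2^4: two identical calls per level, about 2n calls in total)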

+2

We usually analyze the worst-case time complexity, which occurs when isEven(n) is true. In this case we have

 T(n) = 2T(n/2) + O(1) 

where T(n) denotes the time complexity of pow(x, n).

Apply the master theorem, case 1, to get the Big-O form of T(n). That is:

 T(n) = O(n) 

+1
