Yes, you are right, at least in the worst case.
Note that for n = 2^k for some natural k you get exactly this: unless you reach the stop clause, the condition is always true, and the function recurses twice.
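(The question's code isn't repeated here, but a minimal sketch of the shape this analysis assumes would be the following; the stop clause at n <= 1 and the return values are hypothetical:)

```python
def f(n):
    if n <= 1:              # stop clause (assumed)
        return 1
    if n % 2 == 0:          # isEven(n): two recursive calls
        return f(n // 2) + f(n // 2)
    return f(n // 2)        # odd n: a single recursive call
```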
When this is established, it is enough to analyze:
T(n) = T(n/2) + T(n/2) + X
Where X is a constant (each call does a constant amount of work, not counting the work done in its recursive calls).
From the master theorem, case 1, with:
f(n) = X, a = 2, b = 2, c = 0 (since X is in O(n^0) = O(1))
Since c = 0 < 1 = log_2(2), the conditions for case 1 apply, and we can conclude that T(n) is in Theta(n^log_2(2)) = Theta(n).
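As a quick sanity check (using the hypothetical shape sketched above), counting the calls for n = 2^k grows linearly in n:

```python
def count_calls(n):
    # number of invocations of the sketched f(n); stop clause assumed at n <= 1
    if n <= 1:
        return 1
    if n % 2 == 0:
        return 1 + 2 * count_calls(n // 2)
    return 1 + count_calls(n // 2)

for k in range(1, 11):
    n = 2 ** k
    print(n, count_calls(n))    # 2 -> 3, 4 -> 7, 8 -> 15, ... i.e. 2n - 1, which is Theta(n)
```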
Average case analysis:
For the average case, with a uniformly distributed number n, half of the bits in its binary representation will (on average) be set (1) and half will be unset (0).
Since dividing by 2 is basically an arithmetic shift to the right, and the condition isEven(n) is true if and only if the least significant bit is 0, the average complexity satisfies:
T(n) = 0.5 T(n/2) + 0.5 * 2 T(n/2) + X = 0.5 * 3 * T(n/2) + X = 1.5 * T(n/2) + X
So,
T(n) = 3/2 T(n/2) + X
Case 1 is still applicable (again assuming X is a constant), with:
a = 3/2, b = 2, c = 0
and you get an average-case complexity of Theta(n^log_2(1.5)) ≈ Theta(n^0.585).
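A rough empirical check of that exponent (again with the hypothetical call-counting function, treating random numbers of a fixed bit length as the "uniformly distributed" input): the ratio between the average call count and n^0.585 stays roughly constant as the bit length grows.

```python
import random

def count_calls(n):
    # number of invocations of the sketched f(n)
    if n <= 1:
        return 1
    if n % 2 == 0:
        return 1 + 2 * count_calls(n // 2)
    return 1 + count_calls(n // 2)

for bits in (16, 20, 24):
    # random numbers with the top bit forced, so every sample has exactly `bits` bits
    samples = [random.getrandbits(bits) | (1 << (bits - 1)) for _ in range(200)]
    avg = sum(count_calls(n) for n in samples) / len(samples)
    # n ~ 2^bits, so n^0.585 ~ 2^(bits * 0.585); the last column should be roughly constant
    print(bits, round(avg, 1), round(avg / 2 ** (bits * 0.585), 3))
```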
Quick note:
This assumes that all arithmetic is O(1). If that is not the case (very large numbers), you should put the arithmetic's complexity instead of a constant in the definition of T(n) and reanalyze. Assuming each such operation is sublinear (in the number itself, not in the number of bits representing it), the worst-case result remains Theta(n), since case 1 of the master theorem still applies. (For the average case, the same holds for any arithmetic whose cost is "better" than the ~O(n^0.585) result shown above.)