Why is the Fibonacci sequence O(2^n) instead of O(log n)?

I took discrete mathematics a while ago (in which I learned about the Master theorem and Big Theta / Omega / O), and I seem to have forgotten the difference between O(log n) and O(2^n) (not in the theoretical sense of Big O). I understand that algorithms like merge sort and quicksort are O(n log n) because they repeatedly divide the original input array into subarrays until each subarray has size 1 before recursing back up the tree, giving a recursion tree of height log n + 1. But if you calculate the height of the recursion tree using n / b^x = 1 (i.e., when the subproblem size becomes 1, as described in the answer here), it seems you always find that the height of the tree is log(n).

If you compute the Fibonacci sequence using recursion, I would think you would also get a tree of height log n, but for some reason the algorithm is O(2^n). I thought that maybe the difference is that you have to remember all the Fibonacci numbers for each subproblem in order to get the actual Fibonacci number, meaning the value at every node must be computed, but with merge sort the value at every node must also be used (or at least sorted). This is unlike binary search, where you visit only certain nodes based on comparisons made at each level of the tree, so I think that is where my confusion arises.

In particular, what causes the Fibonacci sequence to have a different time complexity than algorithms like merge sort / quicksort?

+6
source share
6 answers

The other answers are correct, but they don't make it clear where the big difference between the Fibonacci algorithm and divide-and-conquer algorithms actually lies. Indeed, the shape of the recursion tree is the same for both classes of functions: it is a binary tree.

The trick to understanding it is actually very simple: consider the size of the recursion tree as a function of the input size n.

Recall some basic facts about binary trees :

  • The number of leaves n in a full binary tree equals the number of non-leaf nodes plus one. Therefore, the size of the tree is 2n-1.
  • In a perfect binary tree, all non-leaf nodes have two children.
  • The height h of a perfect binary tree with n leaves is log(n); for a balanced binary tree, h = O(log(n)), and for a degenerate binary tree, h = n-1.

Clearly:

  • To sort an array of n elements with a recursive algorithm, the recursion tree has n leaves. It follows that the width of the tree is n, and the height of the tree is O(log(n)) on average and O(n) in the worst case.

  • To compute the k-th element of the Fibonacci sequence with a recursive algorithm, the recursion tree has k levels (to see why, consider that fib(k) calls fib(k-1), which calls fib(k-2), and so on). It follows that the height of the tree is k. To estimate a lower bound on the width and the number of nodes in the recursion tree, note that since fib(k) also calls fib(k-2), there is a perfect binary tree of height k/2 inside the recursion tree. Taken alone, that perfect subtree has 2^(k/2) leaf nodes. Thus the width of the recursion tree is at least O(2^(k/2)), or equivalently 2^O(k).

The most important difference is that:

  • for divide-and-conquer algorithms, the input size is the width of the binary tree.
  • for the Fibonacci algorithm, the input size is the height of the tree.

Therefore, the number of nodes in the tree is O(n) in the first case, but 2^O(n) in the second. The Fibonacci tree is much larger relative to the size of the input.

You mentioned the Master theorem; however, the theorem cannot be applied to analyze Fibonacci's complexity, because it only applies to algorithms where the input is actually divided at each level of recursion. Fibonacci does not divide its input; in fact, the calls at level i produce almost twice as much total work at the next level i+1.

+7
source

To address the gist of the question, that is, "why Fibonacci and not Mergesort", you should focus on this key difference:

  • The tree you get from Mergesort has N elements on each level, and there are log(n) levels.
  • The tree you get from Fibonacci has N levels because of the presence of F(n-1) in the formula for F(N), and the number of elements on each level can vary greatly: it can be very low (near the root, or near the lowest leaves) or very high. This, of course, is caused by recomputation of the same values.

To understand what I mean by "recomputation", look at the tree for computing F(6):

[fib(6) computation tree image: http://composingprograms.com/pages/28-efficiency.html]

How many times do you compute F(3)?

+4
source

Consider the following implementation

    int fib(int n) {
        if (n < 2) return n;
        return fib(n-1) + fib(n-2);
    }

Denote by T(n) the number of operations that fib performs to compute fib(n). Since fib(n) calls fib(n-1) and fib(n-2), T(n) is at least T(n-1) + T(n-2). This, in turn, means that T(n) > fib(n). There is a closed-form formula for fib(n) (Binet's formula), which is a constant raised to the power n. Therefore T(n) is at least exponential. Q.E.D.

+3
source

With the recursive algorithm, you perform approximately 2^N operations (additions) for fibonacci(N), so it is O(2^N).

With a cache (memoization), you perform about N operations, so it is O(N).

Algorithms with complexity O(N log N) are often a combination of iterating over every element (O(N)) with a split-and-merge recursion: dividing by 2 means you make log N levels of recursion.

+2
source

Merge sort's time complexity is O(n log(n)). Quicksort's average case is O(n log(n)); its worst case is O(n^2).

Other answers explain why a naive recursive Fibonacci is O (2 ^ n).

If you have read that Fibonacci(n) can be computed in O(log(n)), that is possible using iteration and repeated squaring, via either the matrix method or the Lucas sequence method. Sample code for the Lucas sequence method (note that n is divided by 2 on each loop iteration):

    /* lucas sequence method */
    int fib(int n) {
        int a, b, p, q, qq, aq;
        a = q = 1;
        b = p = 0;
        while (1) {
            if (n & 1) {
                aq = a*q;
                a = b*q + aq + a*p;
                b = b*p + aq;
            }
            n /= 2;
            if (n == 0)
                break;
            qq = q*q;
            q = 2*p*q + qq;
            p = p*p + qq;
        }
        return b;
    }
+2
source

As far as I understand, the flaw in your reasoning is assuming that in a recursive implementation for evaluating f(n), where f is the Fibonacci sequence, the input size is reduced by a factor of 2 (or some other factor), which is not the case. Each call (except for the "base cases" 0 and 1) makes exactly 2 recursive calls, and there is no way to reuse previously computed values. In light of the statement of the Master theorem on Wikipedia, the recurrence

    f(n) = f(n-1) + f(n-2)

is a case to which the Master theorem cannot be applied.

+1
source
