Why does Big-O notation use O(1) instead of O(k)?

If I understand Big-O notation correctly, k would stand for the constant amount of time an algorithm takes. Why is constant time written as O(1) rather than O(k), given that this constant varies from one algorithm to another? Linear growth (O(n + k)) uses such a variable to shift the running time by some fixed amount, so why not do the same for constant complexity?

algorithm complexity-theory big-o notation
1 answer

There is no such thing as linear-growth asymptotics of O(n + k) where k is a constant. If k were a constant and you went back to the limit definition of asymptotic growth rates, you would see that O(n + k) = O(n), because the constant drops out.
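As a worked instance of that step (my own addition, assuming k >= 0 and using the definition quoted later in this answer): take f(n) = n + k, g(n) = n, and d = 1 + k. Then

\[
  n + k \;\le\; n + k \cdot n \;=\; (1 + k)\,n \qquad \text{for all } n \ge 1,
\]

so n + k is in O(n).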

Your answer might legitimately be O(n + k) if k is a variable that is essentially independent of the other input size n. You commonly see this in comparisons-versus-moves analyses of sorting algorithms.
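As one concrete illustration (my example, not part of the original answer): counting sort is a standard algorithm whose running time is O(n + k), where n is the number of elements and k is the size of the value range, an input parameter independent of n. A minimal Python sketch:

```python
# Hypothetical illustration: counting sort runs in O(n + k) time, where
# n = len(values) and k = value_range are two independent inputs, so
# neither term can be folded into the other.
def counting_sort(values, value_range):
    counts = [0] * value_range                 # O(k): one bucket per possible value
    for v in values:                           # O(n): tally each input value
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):     # O(n + k): emit values in sorted order
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5], value_range=6))  # [1, 1, 3, 4, 5]
```

Because k can be much larger or much smaller than n, neither term dominates in general, so the bound cannot be simplified to O(n) or O(k).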

To answer why we drop k from Big-O notation (something that, in my opinion, is taught poorly and leads to exactly this confusion), one definition of O() (as I recall it) is the following:

\[
  f(n) \in O(g(n)) \iff \exists\, d,\ n_0 \ \text{such that}\ \forall n > n_0:\ f(n) \le d \cdot g(n)
\]

 Read: f(n) is in O(g(n)) iff there exist d and n_0 such that for all n > n_0, f(n) <= d * g(n)

Let's apply this to our problem, where k is a constant and therefore f(n) = k and g(n) = 1.

  • Do there exist a d and an n_0 that satisfy these requirements?

Trivially, the answer is of course yes: choose any d > k, and the definition is satisfied for every n > 0 (that is, n_0 = 0 works).
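Spelling that choice out against the quoted definition (just a restatement of the step above):

\[
  f(n) = k \;\le\; d \cdot 1 \;=\; d \cdot g(n) \qquad \text{for all } n > n_0 = 0,
\]

which is exactly the condition for k \in O(1). The same argument works for any fixed constant k, which is why every constant running time collapses to O(1).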
