Question about Big O and Big Omega

This is probably a naive question about writing big-O. Say, for example, I have an algorithm that splits an entire list (O(n)) and then joins it back together (O(n)). I take it this means the efficiency is O(n) + O(n). Does this simplify to 2O(n), O(2n), or O(n)? From what I know of this notation, it would be O(2n), and by the rules of asymptotic notation you can drop the 2, giving an efficiency of O(n).
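For concreteness, here is a deliberately simplified Python sketch of the kind of algorithm I mean (the function names and the non-recursive structure are invented purely for illustration):

    def split_halves(items):
        # One O(n) pass: copying the list into two halves touches every element.
        mid = len(items) // 2
        return items[:mid], items[mid:]

    def join_halves(left, right):
        # Another O(n) pass: concatenation copies every element again.
        return left + right

    def split_and_rejoin(items):
        # Total work: O(n) + O(n), the sum being asked about.
        left, right = split_halves(items)
        return join_halves(left, right)

    print(split_and_rejoin([1, 2, 3, 4]))  # [1, 2, 3, 4]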

If we were instead trying to find a lower bound, does this rule still apply? If Ω(n) + Ω(n) = Ω(2n), can you drop the 2? I would have thought not, since you would actually be lowering the lower bound (because n < 2n).

4 answers
"Does this simplify to 2O(n), O(2n), or O(n)?"

All of the above, but the first two are needlessly complicated. O(n) would be the canonical notation.

2*N is proportional to N, so if the maximum cost is no more than proportional to 2*N for sufficiently large N (that is, O(2*N)), it is also no more than proportional to N for sufficiently large N (that is, O(N)).
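Written out against the formal definition (a standard textbook argument, with c and N_0 as the usual witness constants):

    f(N) \le c \cdot 2N = (2c) \cdot N \quad \text{for all } N \ge N_0
    \implies f(N) = O(N) \text{ with witness constant } 2c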

"If we were instead trying to find a lower bound, does this rule still apply?"

Most definitely yes.

2*N is proportional to N, so if the minimum cost is at least proportional to 2*N for sufficiently large N (that is, Ω(2*N)), it is also at least proportional to N for sufficiently large N (that is, Ω(N)).
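The same constant absorption works for the lower bound; a sketch under the same assumptions:

    f(N) \ge c \cdot 2N = (2c) \cdot N \ge c \cdot N \quad \text{for all } N \ge N_0
    \implies f(N) = \Omega(N)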


For example, say you have an algorithm that takes 300 + 100*N ms to execute. Its running time is Θ(N), and therefore Ω(N).

What if the algorithm were twice as slow? Even if it took 600 + 200*N ms to execute, its running time would still be Θ(N), and therefore Ω(N).
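Checking both claims against the definition of Θ (the constants 400 and 800 are just one valid choice):

    100N \le 300 + 100N \le 400N \quad \text{for all } N \ge 1 \implies 300 + 100N = \Theta(N)
    200N \le 600 + 200N \le 800N \quad \text{for all } N \ge 1 \implies 600 + 200N = \Theta(N)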


The most important thing to understand about Θ(N) and related notations is that they describe how well something scales. That one algorithm takes twice as long as another says nothing about how well either of them scales, so it should come as no surprise that they can have the same Ω() lower bound on their running time.

---

This would simplify, usually, to O(n), indicating that the amount of time needed to solve the problem is proportional to the size of the problem. In this case it may be more descriptive to write 2O(n), although it means the same thing.

---

It has been a while, but I think you are right in both cases.

UPDATE

Actually, it works out like this: Ω(n) + Ω(n) == Ω(n)
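A short justification from the definition of Ω (c_1, c_2, and n_0 are the usual witness constants):

    f(n) \ge c_1 n \text{ and } g(n) \ge c_2 n \quad \text{for all } n \ge n_0
    \implies f(n) + g(n) \ge (c_1 + c_2) n \ge c_1 n
    \implies f(n) + g(n) = \Omega(n)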

---

I would go by the definition of Big-O:

If a function f(n) is <= c*g(n) for some constant c and all sufficiently large values of n, then we can say that f(n) = O(g(n)). In your example, g(n) = n.

So, for example, if you can choose some value of c that makes f(n) <= c*n for sufficiently large n, then you can say that f(n) = O(n).

Big Omega is defined similarly.
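As a worked instance tied back to the question (the constants are one arbitrary valid choice): take f(n) = 2n, the combined cost of the two O(n) passes. Then

    2n \le 3n \quad \text{for all } n \ge 1 \implies 2n = O(n)
    2n \ge 1 \cdot n \quad \text{for all } n \ge 1 \implies 2n = \Omega(n)

so the factor of 2 can be dropped from the upper bound and the lower bound alike.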
