Still confused about Big O notation

So I've been trying to understand Big O notation, and I mostly get it, but there are still a few things that confuse me. I keep reading that if something is O(n), this usually refers to the worst case, but it doesn't have to; it can also describe the best case, for example insertion sort's best case is O(n). But I can't understand what that means. I know that if the worst case is O(n^2), it means the function describing the algorithm's work in the worst case grows no faster than n^2 (it's an upper bound). But if the best case is O(n), how should I read that? In the best case, the algorithm grows no faster than n? What I picture is a graph with n as the upper bound, e.g.

[figure: a growth curve with n drawn as its upper bound]

If the best case of the algorithm is O(n), then n is an upper bound on how fast the algorithm's operations grow in the best case, so they can't grow faster than n... but they don't have to, which means they could grow as slowly as O(log n) or O(1), since those fit under that upper bound? That doesn't make sense, because O(log n) or O(1) would be a better case than O(n), so then O(n) couldn't be the best case? I'm so lost lol

+8
sorting algorithm big-o
3 answers

Big-O, Big-Θ, and Big-Ω are independent of worst case, average case, and best case.

The notation f(n) = O(g(n)) means that f(n) grows no faster than some constant multiple of g(n). The notation f(n) = Ω(g(n)) means that f(n) grows no more slowly than some constant multiple of g(n). The notation f(n) = Θ(g(n)) means that both of these are true.
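For reference, the same three statements can be written out formally; the following is just a minimal restatement of the standard textbook definitions (c and n_0 are the usual constant and threshold, nothing specific to this answer):

    % Standard Bachmann-Landau style definitions, for all sufficiently large n
    f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{such that} \ f(n) \le c \cdot g(n) \ \text{for all} \ n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that} \ f(n) \ge c \cdot g(n) \ \text{for all} \ n \ge n_0
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))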

Note that f(n) here may stand for the best-case, worst-case, or average-case running time of a program on inputs of size n.
Also, "average" itself can mean several things: averaging over the inputs ("expected" time), averaging over a sequence of operations (amortized time), both, or something else entirely.

Often, people are interested in the worst-case running time of a program, amortized over the program's whole run (so if something initially costs n, but only costs 1 for each of the next n items, it averages out to a cost of 2 per item). The most useful thing to measure here is the least upper bound on the worst case; so usually, when someone asks for the Big-O of a program, this is what they are looking for.
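To make that averaging idea concrete, here is a minimal Python sketch using the textbook dynamic-array example rather than anything from this answer; the function name and the starting capacity are purely illustrative:

    # Amortized cost sketch: a dynamic array that doubles its capacity pays
    # ~size to copy everything on a resize, but most appends cost 1.
    # Averaged over n appends, the cost per append stays below a small constant.
    def amortized_append_cost(n):
        capacity = 1
        size = 0
        total_cost = 0
        for _ in range(n):
            if size == capacity:      # resize: copy all existing elements
                total_cost += size
                capacity *= 2
            total_cost += 1           # the append itself
            size += 1
        return total_cost / n         # average (amortized) cost per append

    if __name__ == "__main__":
        for n in (10, 1_000, 100_000):
            print(n, amortized_append_cost(n))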

Similarly, to prove that a problem is inherently hard, people might try to show that its worst-case (or perhaps average-case) running time is at least a certain amount (for example, exponential).
You would use Big-Ω notation for those, because you are looking for lower bounds.

However, there is no particular relationship between the worst case and Big-O, or the best case and Big-Ω.
Either can be used for either; it's just that one pairing is more typical than the other.

Consequently, upper-bounding the best case isn't very useful. Yes, if an algorithm always takes O(n) time, then you can say it is O(n) in the best case, as well as on average and in the worst case. That's a perfectly fine statement, except that the best case is usually rather trivial and therefore not interesting on its own.

Also, note that f(n) = n = O(n^2) is technically correct, since f grows no faster than n^2, but it isn't useful, because it isn't the least upper bound; there is a far more obvious and useful upper bound, namely O(n). So yes, you are free to say that the best/worst/average running time of a program is O(n!). That is mathematically perfectly correct. It is simply useless, because when people ask for Big-O, they are interested in the least upper bound, not just any random upper bound.

It is also worth noting that it can simply be insufficient to describe the running time of a program as f(n). The running time often depends on the input itself, not just its size. For example, it may be that even queries are trivially easy to answer, while odd queries take a long time.
In that case you cannot give f as a function of n alone; it would depend on other variables too. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to decide what is interesting to measure. Using the tools in a useful way requires some creativity, and math is no exception.
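As a toy sketch of that even/odd point (the function below is purely hypothetical; it exists only to show a running time that depends on the value of the input rather than its size):

    # Running time depends on the input itself, not just how big it is:
    # even queries return immediately, odd queries do work proportional to q.
    def answer_query(q):
        if q % 2 == 0:
            return 0                  # trivial: constant time
        total = 0
        for i in range(q):            # expensive: linear in the value of q
            total += i
        return total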

+11

Informally speaking, best case O(n) means that when the input meets certain conditions (i.e., is the best possible input for the algorithm), the number of operations performed in that best case is linear in n (for example, 1n or 1.5n or 5n). So if O(n) is the best case, this usually means that in the best case the running time is exactly linear in n (i.e., asymptotically no less and no more); see (1). Of course, if for the same algorithm the best case can be shown to perform no more than c * log N operations (where c is a constant), then that algorithm's best-case complexity would informally be denoted O(log N) rather than O(N), and people would say it is O(log N) in the best case.
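To make the insertion-sort case concrete, here is a minimal textbook-style sketch (not necessarily the exact variant the question had in mind). On an already-sorted input the inner loop never runs, so the work is linear in n; on a reverse-sorted input the inner loop runs about i times per element, giving the quadratic worst case:

    def insertion_sort(a):
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # shifts happen only while elements are out of order;
            # on already-sorted input this loop body never executes
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a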

Formally speaking, "the best-case complexity of the algorithm is O(f(n))" is an informal and incorrect way of saying "the complexity of the algorithm is Ω(f(n))" (in the sense of Knuth's definition, see (2)).

See also:

(1) Wikipedia, "Family of Bachmann-Landau notations"

(2) Knuth's paper "Big Omicron and Big Omega and Big Theta"

(3) Big Omega notation: what is f = Ω(g)?

(4) What is the difference between Θ (n) and O (n)?

(5) What is a simple English explanation of the “Big O” notation?

+3

I find it easier to think of O() in terms of ratios rather than bounds. It is defined in terms of bounds, and so that is the correct way to think about it, but it seems a bit more useful to ask: "if I double the number/size of inputs to my algorithm, does the processing time double (O(n)), quadruple (O(n^2)), etc.?" Thinking about it this way makes it a little less abstract, at least for me...
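In that spirit, here is a quick and admittedly noisy way to eyeball that ratio; the linear_work function below is an arbitrary stand-in for whatever algorithm is actually being measured:

    import time

    def time_it(f, n):
        start = time.perf_counter()
        f(n)
        return time.perf_counter() - start

    def linear_work(n):               # stand-in O(n) routine
        return sum(range(n))

    if __name__ == "__main__":
        t1 = time_it(linear_work, 1_000_000)
        t2 = time_it(linear_work, 2_000_000)
        # expect a ratio of roughly 2 for linear work; ~4 would suggest quadratic
        print("ratio:", t2 / t1)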

+2
