What is the "big O" notation? Where do expressions like O(n) come from?

Possible duplicate:
Plain English explanation of Big O

I suspect it's something taught in classes, but as a self-taught programmer I have rarely seen it explained.

I've picked up bits of it over time: I know O(1) is the best and something like O(n^n) is very bad. But can someone point me to a good explanation of what it actually represents, and where these expressions come from?

+7
big o
3 answers

Big O denotes the worst-case order of an algorithm's running time. It is used to show how well an algorithm scales with the size of the data set (n = number of elements).

Since we are only interested in the order of growth, constant factors are ignored, and any terms that grow more slowly than the dominant term are dropped. Some examples:

A single operation or a fixed set of operations is O(1), since it takes constant time (it does not depend on the size of the data set).

A loop is O(n): each element in the data set is visited once.

A nested loop is O(n^2). A doubly nested loop is O(n^3), and so on.
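
To make those cases concrete, here is a minimal Python sketch (my illustration, not part of the original answer) of constant, linear, and quadratic time:

    def first_item(items):
        # O(1): one operation, no matter how large items is
        return items[0]

    def total(items):
        # O(n): the loop visits each element once
        result = 0
        for x in items:
            result += x
        return result

    def count_pairs(items):
        # O(n^2): a loop nested inside a loop
        pairs = 0
        for x in items:
            for y in items:
                pairs += 1
        return pairs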

Something like searching a binary tree is O(log n), which is harder to see; but at each level of the tree the number of possible locations is halved, so the number of levels is log(n) (assuming the tree is balanced).
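
Binary search over a sorted list shows the same halving behaviour (a sketch I am adding, assuming sorted input; the answer describes a balanced tree, but the step count is O(log n) in both cases):

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2            # midpoint of the current range
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1                # discard the lower half
            else:
                hi = mid - 1                # discard the upper half
        return -1                           # target not present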

Something like finding the subset of a set of numbers whose sum is closest to a given value is exponential, O(2^n), since the sum of every one of the 2^n subsets must be calculated. This is very bad.
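
A brute-force sketch of that subset problem (my illustration; the helper name is made up) makes the exponential cost visible, since it enumerates every one of the 2^n subsets:

    from itertools import combinations

    def closest_subset_sum(numbers, target):
        best = None
        # range(...) plus combinations(...) enumerates all 2^n subsets
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                s = sum(subset)
                if best is None or abs(s - target) < abs(best - target):
                    best = s
        return best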

+4

This is a way of expressing time complexity.

O(n) means that for n elements in a list, on the order of n computations are needed to process the list. That is not bad at all; each increase in n increases the running time linearly.

O(n^n) is bad, because the amount of computation needed for the sort (or whatever you are doing) blows up as n increases.

O(1) is the best, since it means a fixed number of computations regardless of input size. Hash tables are the classic example: looking up a value in a hash table has O(1) time complexity (on average).
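
For example (a small sketch using Python, whose built-in dict is a hash table):

    ages = {"alice": 30, "bob": 25, "carol": 41}

    # Average-case O(1): the key is hashed straight to its slot,
    # so lookup time does not grow with the number of entries.
    print(ages["bob"])             # 25
    print(ages.get("dave", -1))    # -1 (key absent)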

+4

Big O notation, as applied to an algorithm, describes how the running time of the algorithm depends on the amount of input data. For example, a sorting algorithm will take longer to sort a large data set than a small one. If, for a sorting algorithm, you plot the running time (on the vertical axis) against the number of values to sort (on the horizontal axis), for counts from zero up to some large number, the shape of the resulting line or curve will depend on the sorting algorithm used. Big O notation is a shorthand way of describing that line or curve.

In big O notation, the expression in parentheses is the function being graphed. If a variable (say n) appears in the expression, that variable refers to the size of the input data set. You say O(1) is the best. That is true, because the graph of f(n) = 1 does not change with n: an O(1) algorithm takes the same amount of time to run regardless of the size of its input. In contrast, the running time of an O(n^2) algorithm grows with the square of the size of the input data set.
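
You can watch those curves emerge yourself with a rough timing sketch like this (my addition, not part of the original answer); doubling n roughly doubles the O(n) time and roughly quadruples the O(n^2) time:

    import time

    def linear(data):
        return sum(data)                           # O(n)

    def quadratic(data):
        return sum(1 for _ in data for _ in data)  # O(n^2)

    for n in (500, 1000, 2000):
        data = list(range(n))
        for func in (linear, quadratic):
            start = time.perf_counter()
            func(data)
            elapsed = time.perf_counter() - start
            print(func.__name__, n, elapsed)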

That is the basic idea; for a detailed explanation, see the Wikipedia page on "Big O notation".

+1
