Big-O: How do you know which algorithm should be used for a specific time complexity?

So, when someone asks you to give an O(n) or O(n log n) algorithm to compute something, how do you know what to answer? It seems that the only way to answer this question is to already know the time complexity of various algorithms in advance, rather than coming up with something on the spot. Am I understanding this correctly?

+6
algorithm big-o
6 answers

Simple cases:

List iteration - O(n)

A single operation on a balanced binary tree is O(log n) (which means that processing all n elements in the tree is O(n log n))
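As a minimal sketch of that O(log n) idea (my own example, not from the answer): binary search over a sorted Python list as a stand-in for a balanced-tree lookup, since both halve the remaining candidates at every step.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the candidate range, so for n items the loop
    runs at most about log2(n) times -> O(log n) per lookup.
    Repeating this for all n elements would cost O(n log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```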

Hash table operation - O(1)

NP-complete problems (you need some experience here to develop the intuition) - exponential time in general (in practice, effective heuristics work for many of these problems)

For a graph with E edges and V vertices, the complexity F(E, V) depends on the nature of the algorithm and the density of the graph; O(E + V) for a good DFS/BFS.
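To illustrate where O(E + V) comes from, here is a hedged sketch of BFS over an adjacency-list graph (the dictionary representation is my assumption): every vertex is enqueued at most once and every adjacency list is scanned exactly once.

```python
from collections import deque

def bfs(adjacency, start):
    """Breadth-first search over a graph given as {vertex: [neighbours]}.

    Each vertex enters the queue at most once (O(V)) and each adjacency
    list is scanned exactly once (O(E) in total), so the whole traversal
    is O(V + E).
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adjacency[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order


graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(graph, "a"))  # ['a', 'b', 'c', 'd']
```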

Almost every algorithm can be broken down into small blocks like the ones above. When blocks are nested (A-includes-B), the complexities multiply; when blocks follow each other (A-then-B), the overall complexity is max(complexity of A, complexity of B).
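A hedged sketch of those two composition rules, using toy functions of my own: sequential blocks take the max of the parts, nested blocks multiply.

```python
def sequential(items):
    # A-then-B: O(n) followed by O(n) is still O(n)
    # (the sum of the parts, i.e. the max asymptotically).
    total = sum(items)            # O(n)
    largest = max(items)          # O(n)
    return total, largest

def nested(items):
    # A-includes-B: an O(n) loop whose body is itself O(n)
    # multiplies out to O(n * n) = O(n^2).
    pairs = 0
    for x in items:               # O(n) iterations
        for y in items:           # O(n) work per iteration
            if x < y:
                pairs += 1
    return pairs


print(sequential([3, 1, 2]))  # (6, 3)
print(nested([3, 1, 2]))      # 3
```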

+11

You are right: you need to know the time complexity of different algorithms to answer this. You need to know the time complexity of sorting, of looking up items in a dictionary or a hash table, of union-find, flow graphs, DFS, BFS, minimum spanning trees, etc. These are the basics.

Introduction to Algorithms should cover all of this well.

+2
  • Think very hard.
  • Come up with an algorithm for the problem.
  • Analyze it to determine its time complexity (big-O).
  • If the time complexity is what you were asked to produce, you're done.
  • Else goto 1.

Seriously, though, you need to know the complexity of the algorithms for common problems like iterating, searching, sorting, hash table lookups, etc. For example, it is very useful to know that a simple sorting algorithm such as bubble sort is O(n^2) while quicksort is O(n log n) in the average case. It is also important to be able to quickly analyze an algorithm to determine its complexity and see how it can be changed to make it faster.
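To make that contrast concrete, here is a hedged sketch (the helper name and test data are mine): a textbook bubble sort, whose nested passes give O(n^2), checked against Python's built-in sorted(), which runs in O(n log n).

```python
import random

def bubble_sort(items):
    """Classic bubble sort: two nested passes over the data -> O(n^2)."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items


data = [random.randint(0, 100) for _ in range(20)]
assert bubble_sort(data) == sorted(data)  # sorted() is O(n log n)
```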

+1

That is not too far from the truth. There are a few systematic methods, but they are tedious, and even then people usually take a "shortcut" at some point.

Big-O gives an upper bound. If someone asks you about an algorithm and you don't know, you can say O(n^n) and probably be correct (although there are even slower algorithms out there). Of course, that bound is not tight.

A shortcut is basically a flash of inspiration where you spot a pattern that proves a certain upper bound.

In 99% of cases, people simply use their intuition to find a good way of looking at the algorithm and then do just enough work to prove that bound. For example, instead of tracing the actual flow of execution, it is common to argue that "each element is processed at most x times, each time taking constant time" (for an O(n) algorithm). You might miss the fact that, say, only log n elements are ever actually processed, but if you really understand what the algorithm is doing, that is hard to overlook.
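Here is a hedged sketch of that counting argument (my own example, not the answerer's): the inner while loop makes the function look quadratic, but each element is added to the window once and removed at most once, so the total work is O(n).

```python
def longest_run_with_sum_at_most(nums, limit):
    """Length of the longest contiguous run whose sum is <= limit.

    The nested `while` looks suspicious, but `left` only ever moves
    forward: every element is added to the window once and removed at
    most once, so the whole function does O(n) work.
    """
    left = 0
    window_sum = 0
    best = 0
    for right, value in enumerate(nums):
        window_sum += value
        while window_sum > limit:
            window_sum -= nums[left]
            left += 1
        best = max(best, right - left + 1)
    return best


print(longest_run_with_sum_at_most([2, 1, 3, 4, 1, 2], 6))  # 3
```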

Of course, this probably will not get you through a course on algorithms.

As for systematic methods - there are, of course, the video lectures of the course "MIT 6.046J / 18.410J Introduction to Algorithms", which can be watched on YouTube. The lecturer is one of the authors of a very well-respected algorithms textbook.

+1

It may be more academic than what you are looking for, but it is worth knowing that for certain problems you can prove that any algorithm solving them cannot beat some lower bound. For example, comparison-based sorting cannot do better than O(n log n). Another example is the Towers of Hanoi, where any solution must take exponential time (in the number of disks) because of how the problem is constructed.
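As a concrete instance of the Hanoi bound, a hedged sketch of my own: the recurrence is T(n) = 2*T(n-1) + 1, which solves to 2^n - 1 moves, so any solution is exponential in the number of disks.

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the list of moves solving Towers of Hanoi for n disks.

    Moving n disks requires moving n-1 disks twice plus one extra move:
    T(n) = 2*T(n-1) + 1, which gives T(n) = 2^n - 1 moves.
    """
    if n == 0:
        return []
    return (hanoi_moves(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi_moves(n - 1, spare, target, source))


for n in range(1, 6):
    assert len(hanoi_moves(n)) == 2 ** n - 1
print(len(hanoi_moves(5)))  # 31
```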

There is some discussion of lower bounds on MathOverflow ... linked it for completeness, though I don't know how practical a read it would be :D

In any case, sometimes the right question to ask is: can this be done in the given time at all?

Beyond that, as everyone else has said, you need a working knowledge of which algorithms are used to solve these problems. Bob has some good examples. (Just note that hash table operations are not always guaranteed to be O(1); they are expected to be.)

0

The way I do this is a little more pragmatic.

  • Get an algorithm - either you already know one, you find it on the Internet, or you come up with it yourself.
  • Emulate the algorithm in your mind.
  • Increase the size of your input set (n) by a gazillion.
  • Run it in your mind again: did the time increase by a gazillion? Then it's O(n). Did it increase by gazillion * log(gazillion)? Then it's O(n log n).
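The same mental experiment can be run for real. Here is a hedged sketch that times a function at two input sizes and compares the observed ratio of runtimes against what O(n) and O(n log n) would predict; the functions, sizes, and single-run timing are arbitrary choices of mine, so expect noisy numbers.

```python
import math
import random
import time

def measure(fn, n):
    """Time fn on a random list of n floats (very rough: a single run)."""
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    fn(data)
    return time.perf_counter() - start


def growth_report(fn, n_small=200_000, n_big=1_600_000):
    """Compare runtime growth against O(n) and O(n log n) predictions."""
    t_small = measure(fn, n_small)
    t_big = measure(fn, n_big)
    print(f"observed ratio:         {t_big / t_small:.1f}")
    print(f"expected if O(n):       {n_big / n_small:.1f}")
    print(f"expected if O(n log n): "
          f"{(n_big * math.log(n_big)) / (n_small * math.log(n_small)):.1f}")


growth_report(sum)     # linear scan: ratio should sit near the O(n) line
growth_report(sorted)  # comparison sort: ratio nearer the O(n log n) line
```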

The only other way I know of is to actually work out the theoretical complexity of your problem and do a proper asymptotic analysis.

0
