How to calculate algorithm execution time?

I have read about the running time of algorithms in some algorithms books, where it is expressed as O(n). For example, a given piece of code might run in O(n) time in the best case and O(n^3) in the worst case. What does this mean, and how do I work it out for my own code? Is this the same as linear time? And does every predefined library function have its own running time that should be kept in mind before calling it? Thanks...

+4
3 answers

Rob Bell's beginner's guide to big O notation may be a good place to start:

http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

Also take a look at Wikipedia:

http://en.wikipedia.org/wiki/Big_O_notation

There are also some related questions with good answers on Stack Overflow:

What is a simple English explanation of the "Big O" notation?

and

Big-O for eight year olds?

+7

Are you good at math?

If you are bubble-sorting an array that is already sorted, you can check on each pass through the array whether any swap happened. If not, everything is fine - we are done.

So, for the best case you will have O(n) comparisons (n-1, to be precise), and for the worst case (a reversed array) you will have O(n^2) comparisons (n(n-1)/2, to be precise).
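Here is a rough sketch of that in Python (the comparison counter is only there for illustration): a bubble sort with an early-exit flag, so you can see the n-1 comparisons for an already sorted array and n(n-1)/2 for a reversed one.

    def bubble_sort(a):
        # Bubble sort with an early-exit check; returns the number of comparisons.
        a = list(a)
        comparisons = 0
        for i in range(len(a) - 1):
            swapped = False
            for j in range(len(a) - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:  # no swaps on this pass: the array is already sorted
                break
        return comparisons

    n = 10
    print(bubble_sort(range(n)))         # already sorted: n-1 = 9 comparisons
    print(bubble_sort(range(n, 0, -1)))  # reversed: n(n-1)/2 = 45 comparisons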

A more complex example: find the maximum element of an array. Obviously you will always do n-1 comparisons, but how many assignments do you do on average? More complicated math gives the answer: H(n) - 1, where H(n) is the n-th harmonic number.
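If you want to convince yourself of that without the math, you can simply simulate it. A small sketch in Python (the counters and the choice of n are arbitrary): count the assignments to the running maximum over many random arrays and compare the average with H(n) - 1.

    import random

    def find_max_counts(a):
        # Returns (comparisons, assignments) made while finding the maximum of a.
        comparisons = assignments = 0
        best = a[0]
        for x in a[1:]:
            comparisons += 1
            if x > best:
                best = x              # the running maximum is reassigned
                assignments += 1
        return comparisons, assignments

    n, trials = 100, 10_000
    avg = sum(find_max_counts(random.sample(range(10**6), n))[1]
              for _ in range(trials)) / trials
    harmonic = sum(1 / k for k in range(1, n + 1))
    print(avg, harmonic - 1)          # the two values should be close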

Getting the answer for your best and worst cases is usually easy; the average case usually takes a lot more math.

I would advise you to read Knuth, The Art of Computer Programming, Volume 1. But then, who wouldn't?

And the formal definition:

f(n) ∈ O(g(n)) means that there exist a constant c > 0 and an n0 ∈ N such that f(m) <= c*g(m) for all m > n0.
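For example, f(n) = 3n^2 + 5n is in O(n^2): take c = 8 and n0 = 1, because for every m >= 1 we have 3m^2 + 5m <= 3m^2 + 5m^2 = 8m^2 = c*g(m). (The constants c = 8 and n0 = 1 here are just one valid choice, not the only one.)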

Really, you should read up on big O notation on Wikipedia.

0

Big O notation is one kind of asymptotic notation. Asymptotic notation is an idea from mathematics that describes the behaviour of functions "in the limit" - as they approach infinity.

The problem with saying how long an algorithm takes to run is that you usually cannot give the answer in milliseconds, because it depends on the machine, and you cannot give the answer in clock cycles or as an operation count, because that would be too specific to particular data to be useful.

A simple way to look at asymptotic notation is that it discards all the constant factors in a function. Basically, a*n^2 will always be greater than b*n if n is large enough (assuming everything is positive). Changing the constant factors a and b does not change this - it changes the specific value of n at which a*n^2 becomes greater, but not the fact itself. So we say that O(n^2) is greater than O(n), and forget about those constants, which we probably cannot know anyway.
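As a toy illustration in Python (the constants a and b below are made up, and deliberately unfavourable to the quadratic term): whatever a and b you pick, a*n^2 eventually overtakes b*n; the constants only move the crossover point.

    a, b = 1, 1000    # even with b = 1000, the quadratic term wins eventually
    crossover = next(n for n in range(1, 10**7) if a * n * n > b * n)
    print(crossover)  # 1001: from here on, a*n^2 is always larger than b*n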

This is useful because the problems with large n are usually the ones where things slow down enough that we really care. If n is small enough, the time spent is small and the benefit of choosing a different algorithm is small. When n gets large, choosing a different algorithm can make a huge difference. In addition, real-world problems are often much larger than the ones we can easily test with - and they often grow over time (for example, as databases accumulate more data).

This is a useful mathematical model that lets you abstract away enough of the inconvenient details to get useful results, but it is not a perfect system. We do not deal with infinite problems in the real world, there are plenty of times when problems are small enough for those constants to matter in real work, and sometimes you just have to time it with a clock.
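For example, here is a rough way to time things with Python's timeit module (the function being measured and the input sizes are arbitrary):

    import timeit

    def linear_search(a, target):
        # Scan the list until target is found; O(n) in the worst case.
        for i, x in enumerate(a):
            if x == target:
                return i
        return -1

    for n in (1_000, 10_000, 100_000):
        data = list(range(n))
        t = timeit.timeit(lambda: linear_search(data, n - 1), number=100)
        print(n, round(t, 4))  # the time should grow roughly linearly with n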

The MIT OCW course Introduction to Algorithms is very good for this kind of thing. The videos and other materials are available for free, and the textbook (not free) is among the best books available on algorithms.

0
