Why is an O(n^2) algorithm faster than an O(n) algorithm on the same input?

Two algorithms, say A and B, are written to solve the same problem. Algorithm A is O(n). Algorithm B is O(n^2). You would expect algorithm A to perform better.

However, when you run them on a specific input on the same computer, algorithm B is faster. What reasons could explain how this happens?

+6
5 answers

Algorithm A, for example, might run in time 10000000*n, which is O(n).
If algorithm B runs in time n*n, which is O(n^2), then A will be slower than B for every n < 10000000.

O(n) and O(n^2) are asymptotic bounds describing the behavior as n -> infinity.

EDIT - EXAMPLE

Suppose you have the following two functions:

    boolean flag;

    void algoA(int n) {                       // n * 1,000,000 flag flips
        for (int i = 0; i < n; i++)
            for (int j = 0; j < 1000000; j++)
                flag = !flag;
    }

    void algoB(int n) {                       // n * n flag flips
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                flag = !flag;
    }

algoA performs n*1000000 flag-flip operations, so it is O(n), while algoB performs n^2 flag-flip operations, so it is O(n^2).

Just solve the inequality 1000000*n > n^2, and you will find that it holds for n < 1000000. That is, for those input sizes the O(n) method is slower.
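
For instance, a rough timing harness along the following lines (the class name and setup are mine, purely illustrative) should show algoB finishing first for an n well below the crossover point; exact numbers depend on the machine and the JIT:

    // Rough timing sketch (illustrative only): for a small n, the huge constant
    // factor of algoA dominates and the "slower" algoB finishes first.
    public class CrossoverDemo {
        static boolean flag;

        static void algoA(int n) {            // O(n), constant factor 1,000,000
            for (int i = 0; i < n; i++)
                for (int j = 0; j < 1000000; j++)
                    flag = !flag;
        }

        static void algoB(int n) {            // O(n^2), constant factor 1
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    flag = !flag;
        }

        public static void main(String[] args) {
            int n = 1000;                     // well below the 1,000,000 crossover
            long t0 = System.nanoTime();
            algoA(n);
            long t1 = System.nanoTime();
            algoB(n);
            long t2 = System.nanoTime();
            System.out.printf("algoA: %d ms, algoB: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }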

+21

Big-O notation says nothing about the speed itself, only about how the running time changes as n changes.

If both algorithms take the same time per iteration, then @Itay's example also applies.

+3

Knowing the specific algorithms would help give a more accurate answer.

But in the general case, I can think of several important factors:

  • Hardware related

    e.g. if the slower algorithm makes good use of caching and locality or similar low-level mechanisms (compare quicksort’s practical performance with theoretically faster sorting algorithms); see the sketch after this list. It is also worth reading about timsort as an example where an “efficient” algorithm is used to break the problem down into smaller input sets, and a “simpler”, theoretically “more complex” algorithm is run on those sets because it is faster.

  • Input Set Properties

    e.g. if the input size is small, the asymptotic advantage will not show up in a test; likewise, for sorting again, mostly pre-sorted input and completely random input will give quite different results. To obtain an accurate result in a test of this kind, many different inputs should be used; a single example is simply not enough, because an input can be crafted to favor one algorithm over the other.

  • Specific implementation of both algorithms

    e.g. there is a long way from the theoretical description of an algorithm to an implementation; poor use of data structures, recursion, memory management, etc. can seriously affect performance.
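
As a minimal sketch of the caching/locality point (my own example, not from the original answer): the two loops below perform exactly the same number of additions, yet the row-major traversal is typically much faster on a large matrix because it walks memory sequentially:

    // Minimal sketch of the caching/locality point: same number of additions,
    // very different memory-access patterns on a large matrix.
    public class LocalityDemo {
        public static void main(String[] args) {
            int n = 4000;
            int[][] a = new int[n][n];
            long rowSum = 0, colSum = 0;

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++)       // row-major: walks contiguous memory
                for (int j = 0; j < n; j++)
                    rowSum += a[i][j];

            long t1 = System.nanoTime();
            for (int j = 0; j < n; j++)       // column-major: strided, cache-unfriendly
                for (int i = 0; i < n; i++)
                    colSum += a[i][j];

            long t2 = System.nanoTime();
            // sums are printed so the JIT cannot discard the loops entirely
            System.out.printf("row-major: %d ms, column-major: %d ms (sums %d/%d)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rowSum, colSum);
        }
    }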

+3

Although all the answers so far seem right... none of them feel "right" in the context of a CS class. In a computational complexity course you want to be precise and use definitions. I will describe many of the nuances of this question and of computational complexity in general. By the end, we will see why the Itay solution at the top is probably what you should have written. My main problem with Itay's solution is that it lacks the definitions that are the key to writing a good proof for a CS class. Note that my definitions may differ slightly from your class's, so feel free to substitute whatever you prefer.

When we say "an O(n) algorithm", we really mean "this algorithm is in the set O(n)". The set O(n) contains all algorithms whose worst-case asymptotic complexity f(n) has the property f(n) <= c*n + c_0 for some constants c and c_0, where c, c_0 > 0.
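
For a concrete instance of that definition (my example): an algorithm whose exact worst-case cost is 3n + 7 is in O(n), witnessed by

    f(n) = 3n + 7 \;\le\; c \cdot n + c_0 \quad \text{with } c = 3,\ c_0 = 7 \text{ for all } n \ge 0.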

Now we want to prove your claim. First of all, as you have stated the problem, it has a trivial solution. This is because our asymptotic bounds are "worst case". For many "slow" algorithms there is some input on which they are very fast. For example, insertion sort is linear if the input is already sorted! So take insertion sort (O(n^2) in the worst case) and an O(n*log(n)) sort such as merge sort, and note that insertion sort will run faster if you give it an already-sorted array! Boom, proof.
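
A quick way to see this empirically (my sketch, counting comparisons rather than wall-clock time): insertion sort performs about n comparisons on already-sorted input but about n^2/2 on reversed input:

    // Counting comparisons instead of time: insertion sort does ~n comparisons on
    // sorted input and ~n^2/2 on reversed input, despite being O(n^2) worst case.
    public class InsertionSortDemo {
        static long comparisons;

        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0) {
                    comparisons++;
                    if (a[j] <= key) break;   // already in place: stop scanning
                    a[j + 1] = a[j];          // shift larger element right
                    j--;
                }
                a[j + 1] = key;
            }
        }

        public static void main(String[] args) {
            int n = 10000;
            int[] sorted = new int[n];
            int[] reversed = new int[n];
            for (int i = 0; i < n; i++) {
                sorted[i] = i;
                reversed[i] = n - i;
            }

            comparisons = 0;
            insertionSort(sorted);
            System.out.println("sorted input:   " + comparisons + " comparisons"); // ~n

            comparisons = 0;
            insertionSort(reversed);
            System.out.println("reversed input: " + comparisons + " comparisons"); // ~n^2/2
        }
    }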

But I suppose your exam meant more than "show why a linear algorithm can be faster than a quadratic algorithm in the worst case." As Alex noted, this is an open-ended question. The essence of the problem is that runtime analysis assumes certain operations are O(1) (for example, for some problem you might assume multiplication is O(1), even though it becomes quadratically slower as the numbers grow; you could argue that the numbers in this problem are bounded to 100 bits, so it is still "constant time"). Since your class probably focuses specifically on computational complexity, it probably wants you to gloss over this issue. So we will prove the statement assuming that our O(1) assumptions hold, and therefore ignore details such as "caching makes this algorithm faster than the other."
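
To illustrate why "multiplication is O(1)" really is a modeling assumption (my example, using arbitrary-precision integers): the cost of a single multiplication grows visibly with the size of the operands:

    import java.math.BigInteger;
    import java.util.Random;

    // The "multiplication is O(1)" assumption breaks down for arbitrary-precision
    // numbers: one multiply gets slower as the operands grow.
    public class BigMultiplyDemo {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            for (int bits = 1 << 12; bits <= 1 << 20; bits <<= 2) {
                BigInteger x = new BigInteger(bits, rnd);   // random operand of the given bit length
                BigInteger y = new BigInteger(bits, rnd);
                long t0 = System.nanoTime();
                BigInteger z = x.multiply(y);
                long t1 = System.nanoTime();
                System.out.printf("%8d-bit operands: %.3f ms (product has %d bits)%n",
                        bits, (t1 - t0) / 1e6, z.bitLength());
            }
        }
    }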

So now we have one algorithm that runs in f(n), which is O(n), and another algorithm that runs in g(n), which is O(n^2). We want to use the definitions above to show that for some n we can have g(n) < f(n). The trick is that the definitions do not fix c, c_0, c', c_0'. As Itay mentions, we can choose values for those constants such that g(n) < f(n) for many n. The rest of the proof is essentially what he wrote above. For example, let c, c_0 be the constants for f(n) and say they are both 100, and let c', c_0' be the constants for g(n), both equal to 1. Then g(n) < f(n) => n^2 + 1 < 100n + 100 => n^2 - 100n - 99 < 0 => (factor to get the actual bounds on n).
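
To fill in that final factoring step (my arithmetic, not part of the original answer):

    n^2 - 100n - 99 < 0
    \iff 50 - \sqrt{2599} \;<\; n \;<\; 50 + \sqrt{2599}
    \iff 1 \le n \le 100 \quad (\text{for integers } n \ge 1),

so with these constants the "quadratic" algorithm g is faster than the "linear" algorithm f for every input size up to 100.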

+2

It also depends on the scenario. There are three cases to consider: 1. best case, 2. average case, 3. worst case. If you know sorting methods, you will see the same thing happen there. For more information, see the following link:

http://en.wikipedia.org/wiki/Sorting_algorithm

Please correct me if I am wrong.

+1
