Time complexity calculation

I am working through some exam questions and am stuck on this one. I am told that the Quicksort algorithm has a time complexity of O(nlog(n)). For a certain input size, sorting the list takes 4 minutes. The question asks how long it will take to sort a list twice that size on the same system.

I have already ruled out 8 minutes (twice the input size = twice the time; that reasoning only works for linear complexity).

Here is the work I have done so far:

Attempt A

  • 4 = nlog(n)
  • 4 = log(n^n)
  • 16 = n^n
  • At this point I'm stuck.

Attempt B

  • X = 2n*log(2n), since the input size is doubled
  • X = 2n*(1 + log(n))
  • X = 2n + 2n*log(n), and n*log(n) was 4 minutes
  • X = 2n + 2*(4) = 2n + 8
  • I am stuck again at this point.
6 answers

I think the first thing to notice about this problem is that since sorting took 4 minutes, n must be pretty big. For example, I just used quicksort to sort a billion numbers on my computer, and it took less than 3 minutes. So n is probably around a billion (give or take an order of magnitude).

Given that n is huge, the runtime will be close to c*n*lg(n) for some constant c, since lower-order terms of the running time should not matter much at such a large n. If we double n, we get the following ratio of the new runtime to the original runtime:

 Runtime(2n) / Runtime(n)
   = [c * (2n) * lg(2n)] / [c * n * lg(n)]
   = 2 * lg(2n) / lg(n)
   = 2 * log_n(2n)
   = 2 * (1 + log_n(2))

Here lg() is the logarithm to an arbitrary base, and log_n() is the logarithm base n.

First, since we assumed that n is large, one way to proceed is to approximate log_n(2) as 0, so the runtime factor is approximately 2 and the total runtime approximately 8 minutes.

Alternatively, since we probably know n to within an order of magnitude, another possibility is to estimate the factor for plausible values of n:

  • If n = 100 million, we would approximate the factor as 2.075 and the total runtime as 8.30 minutes.
  • If n = 1 billion, we would approximate the factor as 2.067 and the total runtime as 8.27 minutes.
  • If n = 10 billion, we would approximate the factor as 2.060 and the total runtime as 8.24 minutes.

Note that huge changes in our estimate of n produce only small changes in the estimated total runtime.
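The factors in the bullets above are easy to check numerically. A minimal sketch (the function name and the 4-minute baseline are mine, for illustration):

```python
import math

def runtime_after_doubling(n, t_original=4.0):
    """Estimated minutes to sort 2n items, given t_original minutes for n items.

    Assumes runtime = c * n * log(n); the constant c and the log base cancel
    in the ratio, leaving a factor of 2 * (1 + log_n(2)).
    """
    factor = 2 * (1 + math.log(2) / math.log(n))
    return t_original * factor

for n in (10**8, 10**9, 10**10):
    print(f"n = {n:>11}: {runtime_after_doubling(n):.2f} minutes")
```

Running this reproduces the 8.30 / 8.27 / 8.24 minute estimates for 100 million, 1 billion, and 10 billion elements.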

It is worth noting that, although this looks good on paper, in practice architectural effects can make the actual runtime very different from the one calculated here. For example, if doubling the data size causes the algorithm to start paging heavily, the runtime could be much longer than the roughly 8 minutes estimated above.


It is impossible to calculate the absolute time without knowing the value of n.
Let's work through some example values.
Suppose 'k' is the time taken for one operation.

 If n = 2:   k*n*log(n) = 4  =>  k*2*1 = 4   =>  k = 2
             after doubling: k*(2n)*log(2n) = 2*4*2       = 16 minutes
 If n = 4:   k*n*log(n) = 4  =>  k*4*2 = 4   =>  k = 1/2
             after doubling: k*(2n)*log(2n) = (1/2)*8*3   = 12 minutes
 If n = 64:  k*n*log(n) = 4  =>  k*64*6 = 4  =>  k = 1/96
             after doubling: k*(2n)*log(2n) = (1/96)*128*7 = 9.33 minutes

As n increases, the time taken approaches double the original time (8 minutes).
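The worked examples above can be reproduced with a short script, assuming (as this answer does) a cost model of k * n * log2(n) with a hypothetical per-operation time k:

```python
import math

def doubled_runtime(n, t=4.0):
    """Minutes to sort 2n items, if sorting n items took t minutes."""
    k = t / (n * math.log2(n))           # fit the per-operation constant k
    return k * (2 * n) * math.log2(2 * n)

for n in (2, 4, 64, 2**20):
    print(f"n = {n}: {doubled_runtime(n):.2f} minutes")
```

For n = 2, 4, and 64 this gives the 16, 12, and 9.33 minutes above, and for n = 2^20 it is already down to 8.4 minutes.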


The information provided is incomplete.

Proof:

Let the algorithmic complexity be O(nlogn). That means the time t = c*nlogn.

Therefore, we have the following equations:

  • 4 = c*n*logn
  • t = c*n2*log(n2), where t is the required answer
  • n2 = 2*n

The number of variables = 4 (n, n2, t, c)
The number of independent equations = 3
Since we need at least 4 equations to determine 4 unknowns, the information provided is incomplete.


This sounds like an absolutely terrible exam question, probably written by someone who doesn't have a deep understanding of what Big-O notation actually means. It contains many errors, many of which have already been discussed in other answers.

The biggest problem is that Big-O notation does not give you a direct relationship to real time. It discards a huge amount of information that would be required to answer the question as asked.

Other answers here point out that the question gives no indication of how many elements were in the original input set, only that there are twice as many in the second set, and that this information is crucial for giving an answer. But there are a couple of things they did not mention...

First, Big-O ignores the algorithm's overhead. It may be that the algorithm used takes 3.5 minutes of setup regardless of the number of inputs, and that for the initial input set the actual processing time was only about 30 seconds. That would seriously affect any calculation of the time taken for an arbitrary number of inputs.

But as bad as that omission is, Big-O discards even more.

Check out this quote from Wikipedia:

In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f is derived by the following simplification rules:

  • If f(x) is a sum of several terms, the one with the largest growth rate is kept and all others are omitted.
  • If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.

This means the timing can include several terms that are ultimately discarded. What if the algorithm actually takes c * (n + n * log(n)) time, ignoring overhead? In Big-O notation, that is still O(nlogn).
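To see the effect of such a discarded lower-order term, here is a small sketch using a hypothetical cost function c * (n + n*log2(n)); the constant c and the choice of n are made up for illustration:

```python
import math

def runtime(n, c=1e-8):
    # Hypothetical true cost: c * (n + n*log2(n)).
    # Big-O reports only O(n log n); the "+ n" term is dropped.
    return c * (n + n * math.log2(n))

n = 10**8
ratio = runtime(2 * n) / runtime(n)
print(f"doubling n multiplies the runtime by {ratio:.3f}")  # a bit more than 2
```

The discarded term nudges the doubling factor away from the pure n*log(n) prediction, which is exactly the kind of detail the exam question pretends does not exist.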

The only answer that is really possible for this exam question is "some amount of time greater than 4 minutes." We cannot know anything more without additional information. In particular:

  • What is the overhead?
  • What is the time cost of one operation?
  • How many elements are we talking about?
  • What other terms were discarded?

I like @Amitoj's reasoning, but I would generalize it.

Let n0 = the number of elements that leads to a runtime of 4 minutes, and n1 = 2 * n0. Then we have

 c = 4 mins / (n0 * log n0) 

We are trying to find

 t = c * n1 * log n1
   = [4 mins / (n0 * log n0)] * n1 * log n1
   = 4 mins * (n1 / n0) * (log n1 / log n0)

n1 / n0 is always 2.

As n0 goes to infinity, the limit of log n1 / log n0 is 1.

So yes, as n0 gets larger, the limit of t is 4 mins * 2 = 8 mins.
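The limit argument can be checked numerically; a quick sketch (the function name is mine):

```python
import math

def log_ratio(n0):
    """log(n1) / log(n0) for n1 = 2 * n0; tends to 1 as n0 grows."""
    return math.log(2 * n0) / math.log(n0)

for n0 in (10, 10**3, 10**6, 10**12):
    print(f"n0 = {n0}: log ratio = {log_ratio(n0):.4f}")
```

The ratio falls from about 1.30 at n0 = 10 to about 1.03 at n0 = 10^12, so the factor 4 mins * 2 * (log ratio) approaches 8 minutes from above.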


All answers except Anmol Singh Jaggi's are incorrect.

First of all, it is easy to see that this information is not enough to get an answer. Here's why:

All you have to do is solve a system of equations. If the time complexity of your algorithm is O(nlogn), then your first equation is:

 4 = c1 * n * log2(n)

where n is the size of your list. If they want you to find how long the algorithm takes on an input twice as large, they are basically asking you to find x:

 x = c1 * (2n) * log2(2n)

So basically you need to solve a system of two equations with three unknowns (n, c1, and x). Such a system has either no solutions (not our case) or infinitely many.


Now you have to make an assumption about c1. If c1 = 1, the first equation becomes

 n * log2(n) = 4  =>  n ≈ 2.75

Substituting n into the second equation, you get x ≈ 13.5. So 13 and a half minutes.


But again, we got this answer under the assumption that c1 = 1; with a different constant factor you will get a different answer.
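Under the same assumption (c1 = 1, base-2 logarithms), the numbers can be reproduced by solving n * log2(n) = 4 numerically; the bisection helper below is mine, not part of the answer:

```python
import math

def solve_n(target=4.0, c1=1.0):
    """Solve c1 * n * log2(n) = target for n (n > 1) by bisection.

    The left-hand side is monotonically increasing for n > 1, so
    bisection on [1, 1e9] converges to the unique root.
    """
    lo, hi = 1.0 + 1e-9, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if c1 * mid * math.log2(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = solve_n()
x = 2 * n * math.log2(2 * n)
print(f"n = {n:.3f}, x = {x:.2f} minutes")  # roughly n = 2.75, x = 13.5
```

With a different c1, solve_n(c1=...) gives a different n and hence a different x, which is this answer's whole point.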

