How can one theoretically calculate the execution time of a certain method?

Suppose I have a heap sort method whose complexity is O(n log n). When I measured the execution time of this method on an input of 1,000,000 elements, I got 0.375770669 seconds. How can I calculate the runtime of this method theoretically?

+4
3 answers

There is no way to calculate this theoretically. It depends on many factors, such as:

  • The Java edition and the major / minor / patch version.
  • Various JVM settings; for example, how big the heap is.
  • Your hardware platform; CPU, clock speed, memory size, even the motherboard.
  • The load on the machine; that is, what else it is doing at the same time.
  • The amount of fluff on the CPU heat sink. Seriously... the clock speed can be reduced if the processor chip gets too hot, and the frequency of the motherboard's clock oscillator is also (slightly) sensitive to temperature.

Even if you knew all of this, the calculation would essentially amount to emulating the Java JIT compiler and the hardware executing the resulting code. That is far too complicated to contemplate.

The best you can reasonably expect from a theoretical measure of "speed" is to count abstract operations at the source-code level. Even drilling down and counting the bytecodes executed is probably too complicated to be practical.

I want to compare between measured and theoretical.

Basically, you cannot.
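
For what it is worth, counting abstract operations is easy to do empirically for one concrete implementation. Here is a minimal sketch (a generic textbook heap sort, not necessarily the implementation you are measuring) that counts element comparisons so the count can be set against n lg n:

    // Textbook max-heap heap sort instrumented to count element comparisons.
    public class CountingHeapSort {
        static long comparisons = 0;

        static boolean less(int[] a, int i, int j) {
            comparisons++;
            return a[i] < a[j];
        }

        static void siftDown(int[] a, int i, int n) {
            while (2 * i + 1 < n) {
                int child = 2 * i + 1;
                if (child + 1 < n && less(a, child, child + 1)) child++;  // pick the larger child
                if (!less(a, i, child)) break;                            // heap property already holds
                int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
                i = child;
            }
        }

        static void heapSort(int[] a) {
            int n = a.length;
            for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);  // build the heap
            for (int end = n - 1; end > 0; end--) {
                int tmp = a[0]; a[0] = a[end]; a[end] = tmp;         // move the max to the end
                siftDown(a, 0, end);
            }
        }

        public static void main(String[] args) {
            int n = 1_000_000;
            int[] a = new java.util.Random(1).ints(n).toArray();
            heapSort(a);
            double nLgN = n * (Math.log(n) / Math.log(2));
            System.out.printf("comparisons = %,d; n lg n = %,.0f; ratio = %.2f%n",
                              comparisons, nLgN, comparisons / nLgN);
        }
    }

The comparison count is machine-independent, so it is about as close to a "theoretical" measurement as you can get for a specific implementation.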

+5

What you can do is run your code for various input sizes, such as 1,000, 10,000, 100,000, 1,000,000, 10,000,000, etc., and record the time spent sorting. Then plot these timings against the number of elements on an X-Y chart and see whether you get an O(n log n)-shaped curve.
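
For example, a small Java timing harness along these lines (a sketch only; Arrays.sort is used here as a stand-in for your heap sort method) prints the elapsed time and the ratio time / (n lg n), which should settle to a roughly constant value if the running time really grows like n log n:

    import java.util.Arrays;
    import java.util.Random;

    public class SortTiming {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            // Increase the input size by a factor of 10 each round and time the sort.
            for (int n = 1_000; n <= 10_000_000; n *= 10) {
                int[] data = rnd.ints(n).toArray();
                long start = System.nanoTime();
                Arrays.sort(data);                       // stand-in for your heap sort
                double seconds = (System.nanoTime() - start) / 1e9;
                double ratio = seconds / (n * (Math.log(n) / Math.log(2)));
                System.out.printf("n = %,10d  time = %.4f s  time/(n lg n) = %.3e%n",
                                  n, seconds, ratio);
            }
        }
    }

(For serious measurements you would also want warm-up runs and repetitions, since JIT compilation and garbage collection can distort a single timing.)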

Also have a look at this page for a heap sort analysis and demos: http://www.inf.fh-flensburg.de/lang/algorithmen/sortieren/heap/heapen.htm

+2

Remember that the notation O(·) or Θ(·) describes the asymptotic growth rate in the limit as n approaches infinity: it describes, say, how much slower the algorithm gets when you multiply the input size by 10. How closely this corresponds to the actual runtime for actual input sizes (which are always vanishingly small compared to "infinity") depends on how well the theoretical model used to analyse the algorithm matches the actual machine you run it on.

In particular, "heap sort takes Θ(n log n) time" means that there are constants c₁ and c₂ such that, for sufficiently large n, if T(n) is the time it takes on an input of size n, then

c₁ n log n ≤ T(n) ≤ c₂ n log n

The size n = 1,000,000 may or may not be a "large enough n" for this asymptotic behaviour to have kicked in.

Nevertheless, if we assume it has, and interpret the statement as meaning that the elapsed time is approximately c·n·log n for some constant c, then equating

c × 1,000,000 × lg(1,000,000) = 0.375770669 seconds

gives c ≈ 1.88 × 10⁻⁸. This suggests that an input of size n = 2,000,000 should take about 0.79 seconds, and one of size n = 10,000,000 about 4.38 seconds. You can compare these "theoretical" results with experimental results obtained by running the algorithm on inputs of those sizes.
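
The arithmetic above is simple enough to do by hand, but here is a minimal sketch of it in Java, fitting c from the single measurement in the question and then predicting run times for other input sizes under the T(n) ≈ c·n·lg n model:

    public class NLogNModel {
        static double lg(double x) { return Math.log(x) / Math.log(2); }

        public static void main(String[] args) {
            double measuredSeconds = 0.375770669;  // measurement from the question
            double measuredN = 1_000_000;

            // Solve measuredSeconds = c * n * lg(n) for c.
            double c = measuredSeconds / (measuredN * lg(measuredN));
            System.out.printf("c = %.3e%n", c);    // about 1.88e-8

            // Predict run times for larger inputs under the same model.
            for (double n : new double[]{2_000_000, 10_000_000}) {
                System.out.printf("predicted T(%,.0f) = %.2f s%n", n, c * n * lg(n));
            }
        }
    }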


A rough rule of thumb for typical computers is that c lies somewhere between 10⁻⁷ for slow computers and algorithms and 10⁻⁹ for reasonably decent ones. Multiply by a couple more factors of 10 at both ends to be safe. (The idea is that typical analyses give constants somewhere in the range of, say, 1-200, and typical computers differ in speed by an order of magnitude or two. Of course, "typical" is subjective, and if you try this with Fibonacci you will probably be disappointed.)

Starting from the a priori guess of c ≈ 10⁻⁸ would have predicted a runtime of about 0.2 seconds.

+1
