Remember that O(·) or Θ(·) notation describes the asymptotic growth rate in the limit as n goes to infinity: it tells you, roughly, how much slower the algorithm gets when you multiply the input size by 10, say. How well this corresponds to the actual running time for actual input sizes (which are always vanishingly small compared to "infinity") depends on how well the theoretical model used to analyze the algorithm matches the actual machine you have.
In particular, "heap sort takes & Theta; (n log n) time" means that there are constants c 1 and c 2 such that for sufficiently large n , if T (n) is the time it takes to enter size n then
c 1 n log n <T (n) c 2 n log n
A size of n = 1,000,000 may or may not be "large enough n" for this asymptotic behavior to have kicked in.
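One practical way to check is to measure T(n)/(n lg n) for several input sizes and see whether the ratio has settled to a roughly constant value. Below is a minimal sketch of that idea in Python, using heapq as a stand-in heap sort (the original doesn't specify a language or implementation, so these choices are assumptions):

```python
# Sketch: empirically check whether T(n) / (n lg n) is roughly constant,
# i.e. whether the asymptotic regime has "kicked in" at these sizes.
import heapq
import math
import random
import time

def heap_sort(data):
    """Heap sort via a binary heap: heapify, then pop in order."""
    heap = list(data)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

for n in (250_000, 500_000, 1_000_000, 2_000_000):
    values = [random.random() for _ in range(n)]
    start = time.perf_counter()
    heap_sort(values)
    elapsed = time.perf_counter() - start
    # If the running time really behaves like c * n * lg n, this ratio is ~c.
    ratio = elapsed / (n * math.log2(n))
    print(f"n={n:>9,}  T(n)={elapsed:6.3f}s  T(n)/(n lg n)={ratio:.3e}")
```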
Nevertheless, if we assume that it has, and interpret the statement as meaning that the running time is approximately c · n lg n for some constant c, then solving

c · 1000000 · lg(1000000) = 0.3775770669 seconds

gives c ≈ 1.88 × 10⁻⁸. This means that an input of size n = 2,000,000 should take about 0.79 seconds, and one of size n = 10,000,000 about 4.38 seconds. You can compare these "theoretical" predictions with the experimental results you get by actually running the algorithm on inputs of those sizes.
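The arithmetic above is easy to reproduce. Here is a short sketch (the measured 0.3775770669 s for n = 1,000,000 is taken from the text; the rest is just extrapolation under the c · n lg n model):

```python
import math

measured_time = 0.3775770669   # seconds, measured for n = 1,000,000 (from the text)
n_measured = 1_000_000

# Solve c * n * lg(n) = measured_time for c.
c = measured_time / (n_measured * math.log2(n_measured))
print(f"c ≈ {c:.2e}")          # roughly 1.9e-08, matching the estimate above

# Extrapolate to larger inputs under the same model.
for n in (2_000_000, 10_000_000):
    predicted = c * n * math.log2(n)
    print(f"n = {n:>10,}: predicted running time ≈ {predicted:.2f} s")
```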
A rough rule of thumb for typical computers is that c is somewhere between 10⁻⁷ for slow computers and algorithms and 10⁻⁹ for reasonably decent ones. Multiply by a couple more factors of 10 at both ends to be safe. (The idea is that typical analyses give constants c somewhere in, say, 1-200, and typical computers differ in speed by an order of magnitude or two. Of course, "typical" is subjective, and if you try this with Fibonacci, you will probably be disappointed.)
Starting with an a priori guess of c ≈ 10⁻⁸, one would have predicted a running time of about 0.2 seconds.
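For completeness, here is the same rule-of-thumb estimate written out for n = 1,000,000, sweeping c over the range mentioned above (only the 10⁻⁷ to 10⁻⁹ range and the 10⁻⁸ guess come from the text; the rest is illustration):

```python
import math

n = 1_000_000
work = n * math.log2(n)        # the n lg n factor for this input size

for c in (1e-7, 1e-8, 1e-9):
    print(f"c = {c:.0e}: predicted running time ≈ {c * work:.3f} s")
# The middle guess (c = 1e-8) gives roughly 0.2 s, as stated above.
```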