How to calculate the exact complexity of an algorithm?

Short of using asymptotic notation, is tedious step counting the only way to determine the time complexity of an algorithm? And without counting the steps of every line of code, can we arrive at a big-O representation of any program?

Details: I am trying to find the complexity of several numerical analysis algorithms in order to decide which one is best suited to a specific problem. For instance, given the Regula Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (by substituting a value for "n" or any other arguments) which one is less costly.

+5
3 answers

The only reasonable way — not an easy way, but the only reasonable way — to find the exact complexity of a complicated algorithm is to profile it. A modern implementation of an algorithm interacts in complex ways with numerical libraries, with the CPU, and with its floating-point unit. For example, accessing cached memory is much faster than accessing uncached memory, and there may be more than one level of cache. Counting steps really only gives you asymptotic complexity, which, as you say, is not enough for your purpose.
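A minimal timing sketch of this idea (Python is used purely for illustration; solve() here is a hypothetical placeholder workload, not one of the methods from the question):

```python
import time

def solve(n):
    # Hypothetical placeholder workload standing in for the algorithm under test.
    s = 0.0
    for i in range(1, n + 1):
        s += 1.0 / i
    return s

for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    solve(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9}  wall time: {elapsed:.6f} s")
```

Repeating each measurement a few times and taking the minimum helps filter out noise from the operating system and caches.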

Besides, a "step" is not well defined. Is, say, "bloof++;" in C a single step? Depending on what bloof is and where it lives in memory, the hardware may execute it in one cycle or in many.

If you want something more precise than asymptotic notation, you can express the running time as f(n) * (1 + o(1)), which pins down the leading term. For example, n^2 + 2*n + 7 is n^2 * (1 + o(1)). That says more than just O(f(n)), because it identifies the dominant term exactly.
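One way to connect this to measurements is to estimate the leading constant empirically: if T(n) = c * n^2 * (1 + o(1)), then T(n) / n^2 should level off at c as n grows. A rough sketch, with work() as a hypothetical stand-in for an O(n^2) routine:

```python
import time

def work(n):
    # Hypothetical O(n^2) routine used only to illustrate the measurement.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i ^ j
    return total

for n in (200, 400, 800, 1600):
    start = time.perf_counter()
    work(n)
    t = time.perf_counter() - start
    # T(n)/n^2 should approach the hidden constant c as n grows.
    print(f"n={n:5d}  T(n)={t:.4f} s  T(n)/n^2={t / n**2:.3e}")
```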

+5

" " - . n , , .

, , .

+2

For root-finding methods there is no single input size "n" to plug into a formula: the total cost is the number of iterations times the cost of one iteration, and the number of iterations depends on the function, the starting values, and the accuracy you require.

The methods also behave quite differently. Newton-Raphson converges quadratically near a simple root, but it needs the derivative and a good enough starting point, and it can fail to converge otherwise. Regula Falsi only needs an interval that brackets the root and never loses that bracket, but it typically converges only linearly.

To summarize, the most efficient non-linear solver always depends on the problem you are solving. If the class of problems you solve is fairly narrow, running a few experiments with different solvers and measuring the number of iterations and the CPU time will likely give you more useful information.
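A sketch of such an experiment, with the test equation, tolerance, and starting values all chosen purely for illustration (they are not part of the original question):

```python
import time

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    # Newton-Raphson: x <- x - f(x)/f'(x).
    x = x0
    for k in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        x = x - fx / fprime(x)
    return x, max_iter

def regula_falsi(f, a, b, tol=1e-12, max_iter=1000):
    # Regula Falsi: keep a bracketing interval [a, b] and interpolate linearly.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    c = a
    for k in range(1, max_iter + 1):
        c = b - fb * (b - a) / (fb - fa)   # secant-style interpolation point
        fc = f(c)
        if abs(fc) < tol:
            return c, k
        if fa * fc < 0:        # root lies in [a, c]
            b, fb = c, fc
        else:                  # root lies in [c, b]
            a, fa = c, fc
    return c, max_iter

f = lambda x: x**3 - 2*x - 5          # classic test equation, root near 2.0946
fp = lambda x: 3*x**2 - 2

for name, call in [("Newton-Raphson", lambda: newton(f, fp, 2.0)),
                   ("Regula Falsi",  lambda: regula_falsi(f, 2.0, 3.0))]:
    start = time.perf_counter()
    root, iters = call()
    dt = time.perf_counter() - start
    print(f"{name:15s} root={root:.12f}  iterations={iters:3d}  time={dt:.2e} s")
```

Counting iterations (or, better, function and derivative evaluations) for the problems you actually care about is usually a more meaningful comparison than any closed-form complexity expression.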

0