Factorial-Time Algorithms and P/NP

It is easy to see that n! grows more slowly than almost anything raised to the n-th power (say, 100^n), so if a problem were known to be NP-complete and someone stumbled upon an n! algorithm that approximates the solution, we would all be doing the Snoopy dance.

I have 2 questions about this situation:

  • Would an n! algorithm be considered a polynomial-time solution? Factorial, of course, is not a term raised to a constant power.
  • If an n! solution were found, would that mean we have a reasonably fast algorithm, and since (by my premise above) n! grows more slowly than something like 100^n, would that mean heuristic/approximation algorithms are not needed for some NP-complete problems (except in unclear cases)?

Of course, both of these questions rely on the first paragraph being true; if I got that wrong, let me know.

2 answers
  • No, factorial time is not polynomial time. Polynomial time usually means an expression of the form O(N^k), where N is the number of items processed and k is some constant. The important part is that the exponent is constant: N is multiplied by itself a fixed number of times, and that number does not depend on N. An algorithm with factorial complexity means the number of multiplications is not fixed; the number of multiplications grows with N.

  • You seem to have this backwards as well. N^2 would be polynomial complexity; 2^N would not be. Your basic premise is also mistaken: an algorithm with factorial complexity does not mean "we have a fast enough algorithm", at least not as a general rule. If anything, the conclusion is the opposite: a factorial algorithm can be practical in a few special cases (i.e., where N is very small), but it becomes wildly impractical as N grows.
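The distinction in the first bullet can be sketched in a few lines: for polynomial time the exponent is a constant fixed up front, while for factorial time the number of factors multiplied together grows with the input. A minimal illustration (the function names here are mine, not from the answer):

```python
import math

def polynomial_ops(n, k=2):
    # Polynomial time O(n^k): the exponent k is a constant
    # chosen independently of n.
    return n ** k

def factorial_ops(n):
    # Factorial time O(n!): the number of factors being
    # multiplied together grows with n itself.
    return math.factorial(n)

for n in (5, 10, 15):
    print(n, polynomial_ops(n), factorial_ops(n))
```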

Let's try to put this in perspective. Binary search is O(log N). Linear search is O(N). In sorting, the "slow" algorithms are O(N^2) and the "advanced" algorithms O(N log N). Factorial complexity is (obviously enough) O(N!).

Let's try to put some numbers on this, assuming (for the moment) only 10 items. Each figure below is roughly how many times longer it takes to process 10 items instead of 1:

O(log N): 2
O(N): 10
O(N log N): 23
O(N^2): 100
O(N!): 3,628,800

Now, I cheated a little and used the natural logarithm instead of the base-2 logarithm, but we are only after ballpark estimates (and the difference is a small constant factor anyway).
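The figures in the table can be reproduced with a few lines of Python, using the natural logarithm as the answer does:

```python
import math

n = 10
# Approximate operation counts for each complexity class at n = 10.
# This uses the natural log rather than base 2, which only changes
# the logarithmic entries by a small constant factor.
print(round(math.log(n)))        # O(log N)   -> 2
print(n)                         # O(N)       -> 10
print(round(n * math.log(n)))    # O(N log N) -> 23
print(n ** 2)                    # O(N^2)     -> 100
print(math.factorial(n))         # O(N!)      -> 3628800
```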

As you can see, the growth rate for an algorithm with factorial complexity is much faster than for any of the others. If we extend this to 20 items, the difference becomes even more dramatic:

O(log N): 3
O(N): 20
O(N log N): 60
O(N^2): 400
O(N!): 2,432,902,008,176,640,000

The growth rate for N! is so fast that such algorithms are pretty much guaranteed to be impractical unless the number of items involved is quite small. For grins, let's assume that the basic operation for each of the above can be done in a single clock cycle and, just for the sake of argument (and to keep the arithmetic easy), that we have a 10 GHz processor. The baseline, then, is that processing one item takes 0.1 ns. With 20 items, that gives:

O(log N) = 0.3 ns
O(N) = 2 ns
O(N log N) = 6 ns
O(N^2) = 40 ns
O(N!) = 7.7 years
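That last figure checks out: 20! cycles at 0.1 ns per cycle is about 2.4 × 10^8 seconds. The whole table can be sketched under the same one-operation-per-cycle assumption:

```python
import math

n = 20
ns_per_op = 0.1  # hypothetical 10 GHz CPU: one operation per cycle

op_counts = {
    "O(log N)":   math.log(n),
    "O(N)":       n,
    "O(N log N)": n * math.log(n),
    "O(N^2)":     n ** 2,
    "O(N!)":      math.factorial(n),
}

for name, ops in op_counts.items():
    seconds = ops * ns_per_op * 1e-9
    if seconds < 1:
        print(f"{name} = {ops * ns_per_op:.1f} ns")
    else:
        # Convert to years for the truly hopeless cases.
        print(f"{name} = {seconds / (365.25 * 24 * 3600):.1f} years")
```

The first four lines print fractions of a nanosecond up to 40 ns; the factorial line prints roughly 7.7 years.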


It is fairly easy to see that factorial is (at least) exponential in behavior.

It can be (very roughly) approximated as n^n (more precisely, by Stirling's approximation, as sqrt(2 pi n) (n/e)^n).

So if you pick any particular M and think M^n is a good approximation to n!, you are (probably) wrong. For example, 269! is already greater than 100^269, and since n! keeps being multiplied by factors greater than 100 from there on, it continues to pull further ahead.
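That crossover can be checked numerically. Comparing logarithms via `math.lgamma` (where `lgamma(n + 1)` equals ln(n!)) avoids computing the astronomically large integers directly:

```python
import math

# Stirling's approximation, as given above.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# stirling(10) lands within about 1% of 10! = 3,628,800.
print(round(stirling(10)))

# Find the smallest n where n! exceeds 100^n, comparing
# logarithms instead of the huge numbers themselves.
n = 1
while math.lgamma(n + 1) <= n * math.log(100):
    n += 1
print(n)  # -> 269
```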
