The subset-sum problem and the solvability of NP-complete problems

I was reading about the subset-sum problem when I came up with what seems to be a general-purpose algorithm for solving it:

  (defun subset-contains-sum (set sum)
    (let ((subsets) (new-subset) (new-sum))
      (dolist (element set)
        (dolist (subset-sum subsets)
          (setf new-subset (cons element (car subset-sum)))
          (setf new-sum (+ element (cdr subset-sum)))
          (if (= new-sum sum)
              (return-from subset-contains-sum new-subset))
          (setf subsets (cons (cons new-subset new-sum) subsets)))
        (if (= element sum)
            (return-from subset-contains-sum (list element)))
        (setf subsets (cons (cons (list element) element) subsets)))))

"set" is a list that does not contain duplicates, and "sum" is the target sum to search for. "subsets" is a list of cons cells in which the "car" is a subset (as a list) and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by simply consing the element onto the front.

I'm not sure what the runtime complexity of this is, but it seems that with each element "set" grows by, the size of "subsets" doubles, plus one, so it seems to me to be at least quadratic.

I'm posting this because my impression was that NP-complete problems are effectively unsolvable and that the best we can hope for are heuristics, but this seems to be a general-purpose solution which, given enough CPU cycles, will always give the correct answer. How many other NP-complete problems can be solved like this one?

+6
lisp np-complete subset-sum
5 answers

All NP-complete problems have solutions, as long as you are willing to spend the time to compute the answer, that is. Just because there is no known efficient algorithm does not mean there is no algorithm at all. For example, you can simply iterate over all potential solutions, and eventually you will find one. These problems are used all over the place in real-world computing. You just need to be careful about how big a problem you set for yourself if it takes exponential time (or worse!) to solve it.
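That brute-force enumeration can be sketched in a few lines of Lisp (the function name and the T/NIL interface are mine, not from the question): for each element, either skip it or subtract it from the target, which tries all 2^n subsets:

  (defun subset-sum-brute-p (set sum)
    ;; Returns T if some subset of SET sums to SUM, by trying all 2^n
    ;; subsets.  Note that this counts the empty subset, so a SUM of 0
    ;; always succeeds.
    (if (null set)
        (zerop sum)
        (or (subset-sum-brute-p (cdr set) sum)                  ; skip (car set)
            (subset-sum-brute-p (cdr set) (- sum (car set)))))) ; or include it

For example, (subset-sum-brute-p '(3 5 7) 12) returns T (5 + 7 = 12), while (subset-sum-brute-p '(3 5 7) 11) returns NIL.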

+5

NP-complete problems are solvable, just not (as far as we know) in polynomial time. That is, an NP-complete problem may have an O(n*2^n) algorithm that solves it, but it will not, we believe, have, say, an O(n^3) algorithm to solve it.

Interestingly, if a fast (polynomial-time) algorithm were found for any NP-complete problem, then every problem in NP could be solved in polynomial time. This is what the famous question "P = NP?" asks.

If I understand your algorithm correctly (and this is based more on your comments than on the code), then it is equivalent to a brute-force O(n*2^n) algorithm. There are 2^n subsets, and since you also need to sum each subset, the algorithm is O(n*2^n).

One more thing about complexity - O(whatever) only captures how well a particular algorithm scales. You cannot compare two algorithms and conclude that one is faster than the other based on that alone. Big-O notation does not care about implementation details and optimizations - it is possible to write two implementations of the same algorithm, one of which is much faster than the other, even though both are O(n^2). One woman making babies is an O(n) operation, but chances are it will take a lot longer than most O(n*log(n)) sorts you perform. All you can say on the basis of Big-O is that sorting will be slower for very large values of n.

+7

I'm not sure what the runtime complexity of this is, but it seems that with each element "set" grows by, the size of "subsets" doubles, plus one, so it seems to me to be at least quadratic.

If the execution time doubles with each increase of N by one, you are looking at an O(2^N) algorithm. That is also what I would expect from visiting all subsets of a set (or all members of the power set of a set), since there are exactly 2^N of them (if you include the empty set).

The fact that adding (or not adding) an element to all currently known subsets is fast does not mean that the overall processing is fast.

+3

What happens here could be expressed much more easily with recursion:

  (defun subset-sum (set sum &optional subset)
    (when set
      (destructuring-bind (head . tail) set
        (or (and (= head sum) (cons head subset))
            (subset-sum tail sum subset)
            (subset-sum tail (- sum head) (cons head subset))))))

The two recursive calls at the end clearly show that we are traversing a binary tree of depth n, the size of the set. The number of nodes in such a binary tree is O(2^n), as expected.

+2

Subset sum is solvable in pseudo-polynomial time. It is one of Karp's classic NP-complete problems, and the standard dynamic-programming solution runs in O(nM), where M is the target sum. Note that this does not contradict NP-completeness: M can be exponential in the number of bits needed to encode it, so O(nM) is not polynomial in the size of the input.
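A minimal sketch of such a dynamic-programming solution, assuming the set contains positive integers (the function name is mine): mark which sums from 0 to M are reachable, one element at a time:

  (defun subset-sum-dp-p (set sum)
    ;; Returns T if some subset of SET (positive integers) sums to SUM.
    ;; REACHABLE is a table of M+1 flags, updated once per element,
    ;; so the running time is O(nM).
    (let ((reachable (make-array (1+ sum) :initial-element nil)))
      (setf (aref reachable 0) t)       ; the empty subset sums to 0
      (dolist (element set)
        (loop for s from sum downto element
              do (when (aref reachable (- s element))
                   (setf (aref reachable s) t))))
      (aref reachable sum)))

For example, (subset-sum-dp-p '(3 34 4 12 5 2) 9) returns T (4 + 5 = 9). The inner loop runs downward so that each element is used at most once per subset.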

0
