A simple greedy algorithm gives no guarantee on the quality of its solution relative to OPT; it can be made to perform arbitrarily badly.
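For example, the usual greedy rule of taking items in decreasing profit-to-size ratio can end up arbitrarily far from the optimum. A minimal sketch with made-up numbers:

```python
capacity = 100
profit = {"small": 2, "big": 100}
size = {"small": 1, "big": 100}

# Greedy by profit/size ratio: "small" (ratio 2) is taken first,
# after which "big" (ratio 1) no longer fits.
remaining, total = capacity, 0
for item in sorted(profit, key=lambda i: profit[i] / size[i], reverse=True):
    if size[item] <= remaining:
        remaining -= size[item]
        total += profit[item]

print(total)  # 2, whereas OPT = 100 (take only "big")
```

Scaling "big" up makes the gap between greedy and OPT as large as you like.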
Here is a fully polynomial-time approximation scheme (FPTAS) for knapsack that achieves a profit of at least (1 - epsilon) * OPT. The idea is to round every profit down to a multiple of K = epsilon * P / n, where P is the largest single-item profit and n is the number of items; each item loses at most K of its profit, so the total loss is at most n * K = epsilon * P <= epsilon * OPT, which is where the (1 - epsilon) factor comes from.
```python
import math

items = [...]     # the items
profit = {...}    # profit of each item
sizes = {...}     # size of each item
capacity = ...    # knapsack capacity
epsilon = 0.1     # can be made arbitrarily small; the running time grows with 1/epsilon

P = max(profit[item] for item in items)   # largest profit of any single item
K = (epsilon * P) / len(items)            # scaling factor

# Round every profit down to an integer number of K-units.
scaled_profit = {item: int(math.floor(profit[item] / K)) for item in items}

best = _most_prof_set(items, sizes, scaled_profit,
                      max(scaled_profit.values()), capacity)
```
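For intuition, here is what the scaling step does on a small hypothetical profit table (the item names and numbers below are made up purely for illustration):

```python
profits = {"a": 117, "b": 64, "c": 23}
epsilon = 0.5
P = max(profits.values())            # 117
K = epsilon * P / len(profits)       # 19.5
print({item: int(v // K) for item, v in profits.items()})
# {'a': 6, 'b': 3, 'c': 1}
```

The larger epsilon is, the coarser the rounding and the smaller the table the dynamic program below has to fill.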
We need to determine the most profitable set of items. We can do this with dynamic programming, but first let's look at some definitions.
If P is the largest profit of a single item and n is the number of items, then nP is a trivial upper bound on the total profit of any solution. For each i in {1, ..., n} and each p in {1, ..., nP}, let S(i, p) denote a subset of the first i items whose total profit is exactly p and whose total size is minimized, and let A(i, p) be the size of that set (infinity if no such subset exists). A(1, p) is easy to compute for every p in {1, ..., nP}: it equals size(item 1) when p = profit(item 1) and infinity otherwise. We will derive a recurrence for A(i, p), which gives a dynamic program we can use to return an approximate solution: the answer is the most profitable set S(n, p) whose size fits within the knapsack capacity.
A(i + 1, p) = min{ A(i, p), size(item i+1) + A(i, p - profit(item i+1)) }   if profit(item i+1) <= p
A(i + 1, p) = A(i, p)                                                       otherwise
Finally, we give _most_prof_set, which fills this table over the scaled profits and returns the largest profit whose minimum-size set fits within the knapsack capacity:
```python
def _most_prof_set(items, sizes, profit, P, capacity):
    # items: list of items; sizes/profit: dicts keyed by item (profit already scaled)
    # P: largest (scaled) profit of any single item; capacity: knapsack capacity
    n = len(items)
    max_total = n * P            # trivial upper bound on the total profit
    INF = float("inf")

    # A[(i, p)] = minimum total size of a subset of the first i items whose
    # total profit is exactly p (INF if no such subset exists).
    A = {(0, p): INF for p in range(max_total + 1)}
    A[(0, 0)] = 0

    for i in range(1, n + 1):
        item = items[i - 1]
        for p in range(max_total + 1):
            A[(i, p)] = A[(i - 1, p)]                          # leave item i out
            if profit[item] <= p:                              # or put it in
                take = sizes[item] + A[(i - 1, p - profit[item])]
                A[(i, p)] = min(A[(i, p)], take)

    # The answer is the largest profit whose minimum-size set fits in the knapsack.
    return max(p for p in range(max_total + 1) if A[(n, p)] <= capacity)
```
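As a quick end-to-end check, here is the sketch above run on a small hypothetical instance (the item names, profits, sizes, and capacity are invented for illustration):

```python
items = ["a", "b", "c"]
profit = {"a": 60, "b": 100, "c": 120}
sizes = {"a": 10, "b": 20, "c": 30}
capacity = 50
epsilon = 0.1

P = max(profit.values())                                   # 120
K = epsilon * P / len(items)                               # 4.0
scaled = {item: int(profit[item] // K) for item in items}  # {'a': 15, 'b': 25, 'c': 30}

best_scaled = _most_prof_set(items, sizes, scaled, max(scaled.values()), capacity)
print(best_scaled * K)  # 220.0 -- items "b" and "c" fill the capacity exactly
```

Multiplying the scaled result back by K gives a lower bound on the profit of the chosen set; by the argument above it is at least (1 - epsilon) times the optimum.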
A source