I do not know of any general method that will solve the problems you mention. Perhaps the memoization approach used for the Pibonacci problem can be adapted (see the second section below).
In any case, for specific problems of this kind we can sometimes give very fast algorithms (see the sqrt(2) and sqrt(3) solution below).
Reducing such problems to knapsack might not be such a good idea, as I expect there will be other approaches that are much faster.
So, to answer your questions:
Problem using sqrt(2) and sqrt(3)
First, I will answer your second question.
f(x) = 1 for x < sqrt(2) (and x >= 0 too, I presume)
f(x) = f(x - sqrt(2)) + sqrt(3) otherwise
This can be solved very quickly (in O(log log n) time!) using only integer arithmetic (assumed to be O(1) per operation), except for one last step, which requires a multiplication by sqrt(3) and adding 1.
Given n, we need to find the smallest m such that
n - m sqrt(2) < sqrt(2)
i.e.
n - m sqrt(2) < sqrt(2) => n < (m+1)*sqrt(2) => n/sqrt(2) < m+1
and
n - (m-1) sqrt(2) >= sqrt(2) => n >= m sqrt(2) => n/sqrt(2) >= m.
So m is the integer part of n/sqrt(2),
and we have f(n) = m*sqrt(3) + 1.
Thus, we only need to compute [n/sqrt(2)], the integer part of n/sqrt(2).
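For instance, here is a small Python sanity check of that closed form against a direct unwinding of the recurrence (floating point only, small n; the function names are mine):

    import math

    def f_recursive(x):
        """Unwind the recurrence directly (floating point, small x only)."""
        if x < math.sqrt(2):
            return 1.0
        return f_recursive(x - math.sqrt(2)) + math.sqrt(3)

    def f_closed(n):
        """Closed form: f(n) = m*sqrt(3) + 1 with m = integer part of n/sqrt(2)."""
        m = int(n / math.sqrt(2))
        return m * math.sqrt(3) + 1.0

    for n in range(20):
        assert abs(f_recursive(n) - f_closed(n)) < 1e-9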
This can be computed quickly using the continued fraction convergents of sqrt(2), which are rational approximations to sqrt(2) and are, in a sense, the "best" approximations for a given size of denominator.
The convergents a(i)/b(i) of sqrt(2) can be generated by the recurrence:

a0 = 1, b0 = 1
a(i+1) = a(i) + 2*b(i)
b(i+1) = a(i) + b(i)
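For illustration, a short Python sketch of this recurrence (the name is mine); the ratios a(i)/b(i) are 1/1, 3/2, 7/5, 17/12, 41/29, ... and converge to sqrt(2):

    def convergents(k):
        """Yield the first k pairs (a_i, b_i) with a_i/b_i -> sqrt(2)."""
        a, b = 1, 1
        for _ in range(k):
            yield a, b
            a, b = a + 2 * b, a + b

    # list(convergents(5)) == [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29)]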
It can be shown that to compute [n/sqrt(2)] it suffices to take some odd i for which b(i) > 10*n^2 (this uses the Liouville approximation theorem and standard facts about continued fractions), and that [n/sqrt(2)] = [n*b(i)/a(i)] for this i.
Now a(i), b(i) satisfy the matrix equation

[1 2] [a(i)]   [a(i+1)]
[1 1] [b(i)] = [b(i+1)]

so we only need to compute powers of the matrix

[1 2]
[1 1]

until its entries exceed 10*n^2.
It can be shown that the required power of the matrix is O(log n), and therefore it can be computed in O(log log n) time by repeated squaring, using only integer arithmetic (assumed to be O(1) per operation).
So the value of your function f at n can be computed in O(log log n) time using only integer arithmetic (except for the last step, where you multiply the resulting integer by sqrt(3) and add 1).
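Putting the pieces together, here is a minimal Python sketch of the whole procedure (the function names and the exact loop structure are my own; the 10*n^2 threshold is the conservative bound from above). Everything except the final multiplication by sqrt(3) is integer arithmetic:

    import math

    def mat_mult(X, Y):
        """2x2 integer matrix product."""
        return ((X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]),
                (X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]))

    def floor_n_over_sqrt2(n):
        """Integer part of n/sqrt(2), using only integer arithmetic."""
        if n == 0:
            return 0
        M = ((1, 2), (1, 1))
        P = M
        # Repeated squaring: O(log log n) matrix products until all entries exceed 10*n^2.
        while min(P[0][0], P[0][1], P[1][0], P[1][1]) <= 10 * n * n:
            P = mat_mult(P, P)
        a = P[0][0] + P[0][1]   # (a, b) = P * (1, 1) is a convergent of sqrt(2)
        b = P[1][0] + P[1][1]
        return (n * b) // a     # b/a approximates 1/sqrt(2) closely enough

    def f(n):
        """f(n) = m*sqrt(3) + 1 with m = [n/sqrt(2)]; only this step uses floats."""
        return floor_n_over_sqrt2(n) * math.sqrt(3) + 1.0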
Pibonacci numbers
From your comment, the problem is:

g(x) = 1 for 0 <= x < 4
g(x) = g(x-1) + g(x-pi) for x >= 4
This can be solved with memoization:
Let h(m,n) = g(m - n*pi)
Then we have that
h(m,n) = h(m-1, n) + h(m, n+1)
So we have
g(m) = g(m-1) + h(m, 1)
Now you can use memoization, maintaining two tables, one for g(m) and one for h(m,n). Note that even though computing h(m,n) requires h(m,n+1), increasing n only decreases m - n*pi, which drops into the range [0, 4) after O(m) steps, so the recursion does not go on forever.
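A minimal Python sketch of this two-table idea (names are mine; I use a floating-point comparison for the base case m - n*pi < 4, which should be safe unless m - n*pi lands essentially on the boundary):

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def h(m, n):
        """h(m, n) = g(m - n*pi), memoized on the integer pair (m, n)."""
        # Arguments reached from an integer m stay non-negative, so checking < 4 suffices.
        if m - n * math.pi < 4:
            return 1.0
        # Note: recursion depth grows with m; very large m would need an iterative variant.
        return h(m - 1, n) + h(m, n + 1)

    def g(m):
        """Pibonacci value g(m) for a non-negative integer m."""
        vals = [1.0] * 4                        # g(0) = ... = g(3) = 1
        for k in range(4, m + 1):
            vals.append(vals[k - 1] + h(k, 1))  # g(k) = g(k-1) + g(k - pi)
        return vals[m]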
This is not as good (or as fast) as the sqrt(2) and sqrt(3) solution, but I believe it makes the computation feasible.
0-1 Knapsack with irrational weights
Perhaps taking better and better rational approximations to the irrational numbers, and solving the 0-1 knapsack problem on each approximation, will ultimately converge to the correct solution.
I assume that a fixed point in this iteration will give you a solution.
Of course, as the approximation of W gets finer, the O(nW) dynamic programming algorithm quickly becomes exponential in the input size, and you might be better off just enumerating all the possibilities.
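If you want to experiment with that idea, a rough Python sketch (entirely my own framing, not a worked-out method): pick a denominator, replace each weight by the nearest multiple of 1/denominator, and run the standard O(nW) 0-1 knapsack DP on the scaled integers. The table grows linearly with the denominator, which is exactly the blow-up mentioned above.

    def knapsack_01(weights, values, capacity):
        """Standard O(n*W) 0-1 knapsack DP over integer weights."""
        best = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            for c in range(capacity, w - 1, -1):
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    def approx_knapsack(weights, values, capacity, denom):
        """Approximate (possibly irrational) weights by multiples of 1/denom
        and solve exactly on that approximation."""
        int_weights = [round(w * denom) for w in weights]
        return knapsack_01(int_weights, values, int(capacity * denom))

    # approx_knapsack([2**0.5, 3**0.5, 1.0], [3, 4, 1], 3.0, denom=1000)  ->  5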