You can use scipy.optimize.minimize with jac=True, so that a single callable returns both the objective value and its gradient.
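For concreteness, here is a minimal sketch of that approach, using a hypothetical quadratic objective chosen only for illustration:

import numpy
from scipy.optimize import minimize

# Hypothetical objective: a simple quadratic, used only for illustration.
# With jac=True, the callable must return a (value, gradient) pair.
def f_and_grad(x):
    return numpy.sum(x ** 2), 2 * x

res = minimize(f_and_grad, x0=numpy.array([3.0, -4.0]), jac=True, method="CG")
print(res.x)   # should be close to [0, 0]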
If for some reason that's not an option, you can look at how scipy itself copes with this situation:

import numpy

class MemoizeJac(object):
    """Decorator that caches the value and gradient of the function each
    time it is called."""
    def __init__(self, fun):
        self.fun = fun
        self.jac = None
        self.x = None

    def __call__(self, x, *args):
        self.x = numpy.asarray(x).copy()
        fg = self.fun(x, *args)
        self.jac = fg[1]
        return fg[0]

    def derivative(self, x, *args):
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self(x, *args)
            return self.jac
This class wraps a function that returns both the function value and the gradient, keeping a one-element cache and checking whether it already knows the result for the current x. Usage:
from scipy.optimize import fmin_cg

fmemo = MemoizeJac(f)   # f returns (value, gradient)
xopt = fmin_cg(fmemo, x0, fmemo.derivative)
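To make this concrete, here is a small end-to-end sketch in which the hypothetical rosen_and_grad (the 2-D Rosenbrock function) stands in for f and the array stands in for x0:

import numpy
from scipy.optimize import fmin_cg

def rosen_and_grad(x):
    # 2-D Rosenbrock function and its gradient, returned as a pair.
    value = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    grad = numpy.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])
    return value, grad

fmemo = MemoizeJac(rosen_and_grad)
xopt = fmin_cg(fmemo, numpy.array([-1.2, 1.0]), fmemo.derivative)
print(xopt)   # should be close to [1.0, 1.0]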
The odd thing about this code is that it assumes f is always called before fprime (though not every call to f is accompanied by a call to fprime). I'm not sure scipy.optimize actually guarantees that, but the code is easily adapted to drop the assumption. A more robust version follows (untested):
class MemoizeJac(object):
    def __init__(self, fun):
        self.fun = fun
        self.value, self.jac = None, None
        self.x = None

    def _compute(self, x, *args):
        self.x = numpy.asarray(x).copy()
        self.value, self.jac = self.fun(x, *args)

    def __call__(self, x, *args):
        if self.value is not None and numpy.all(x == self.x):
            return self.value
        else:
            self._compute(x, *args)
            return self.value

    def derivative(self, x, *args):
        if self.jac is not None and numpy.all(x == self.x):
            return self.jac
        else:
            self._compute(x, *args)
            return self.jac
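As a quick, purely illustrative sanity check of the robust version (the counted helper below is hypothetical), requesting the gradient before the value should still trigger exactly one evaluation of the wrapped function:

import numpy

calls = []

def counted(x):
    # Records each evaluation and returns a (value, gradient) pair.
    calls.append(numpy.array(x))
    return numpy.sum(x ** 2), 2 * x

m = MemoizeJac(counted)
x = numpy.array([1.0, 2.0])
g = m.derivative(x)   # gradient requested first: triggers one evaluation
v = m(x)              # value comes from the cache, no second evaluation
print(len(calls))     # 1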