The Nelder-Mead simplex method (suggested by Cristián Antuña in the comments above) is a well-known good choice for optimizing (possibly badly-behaved) functions with no knowledge of derivatives (see Numerical Recipes in C, Chapter 10).
There are two somewhat specific aspects to your question. The first is the constraints on the inputs, and the second is the scaling problem. The following suggests solutions for these points, but you might need to manually iterate between them a few times until things work.
Input constraints
Assuming that your input constraints form a convex region (as your examples above indicate, but I'd like to generalize a bit), then you can write a function

is_in_bounds(p)

returning True if and only if p is within the region.
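For concreteness, here is a minimal sketch of such a function, assuming simple box constraints with hypothetical bounds lo and hi (any membership test for a convex region will do):

import numpy as np

# Hypothetical box constraints; replace with your own convex-region test.
lo = np.array([0., 0.])
hi = np.array([1., 1.])

def is_in_bounds(p):
    # True if and only if p lies within the (convex) feasible region.
    return np.all(p >= lo) and np.all(p <= hi)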
Using this function, suppose that the algorithm wants to step from point from_ to point to, where from_ is known to be within the region. Then the following function will efficiently find the furthermost point on the line between the two points at which it can proceed:
from numpy.linalg import norm

def progress_within_bounds(from_, to, eps):
    """
    from_ -- source point (assumed to be within the region)
    to    -- target point
    eps   -- Euclidean precision along the line
    """
    if norm(from_ - to) < eps:
        return from_
    mid = (from_ + to) / 2
    if is_in_bounds(mid):
        return progress_within_bounds(mid, to, eps)
    return progress_within_bounds(from_, mid, eps)
(Note: this function can be optimized for some regions, but it's hardly worth the bother, since it doesn't even call your original object function, which is the expensive one.)
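For example, under the hypothetical box constraints sketched above, a quick check of the bisection might look like this (the point found approaches the boundary up to eps):

import numpy as np

from_ = np.array([0.5, 0.5])   # known to be within the region
to = np.array([2.0, 0.5])      # outside the region
p = progress_within_bounds(from_, to, eps=1e-6)
print(p)  # approximately [1.0, 0.5], the furthermost feasible point on the line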
One of the nice aspects of Nelder-Mead is that the algorithm proceeds by a sequence of steps that are very intuitive. Some of these steps can obviously take you out of the region, but this is easy to modify. Here is an implementation of Nelder-Mead, with the modifications marked between pairs of lines of the form ##################################################################:
import copy

'''
Pure Python/Numpy implementation of the Nelder-Mead algorithm.
Reference: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
'''

def nelder_mead(f, x_start,
                step=0.1, no_improve_thr=10e-6,
                no_improv_break=10, max_iter=0,
                alpha=1., gamma=2., rho=-0.5, sigma=0.5,
                prog_eps=1e-6):
    '''
    @param f (function): function to optimize, must return a scalar score
        and operate over a numpy array of the same dimensions as x_start
    @param x_start (numpy array): initial position
    @param step (float): look-around radius in initial step
    @no_improv_thr, no_improv_break (float, int): break after
        no_improv_break iterations with an improvement lower than no_improv_thr
    @max_iter (int): always break after this number of iterations.
        Set it to 0 to loop indefinitely.
    @alpha, gamma, rho, sigma (floats): parameters of the algorithm
        (see Wikipedia page for reference)
    @prog_eps (float): precision passed to progress_within_bounds
        (added as a parameter so the modified lines below are self-contained)
    '''

    # init
    dim = len(x_start)
    prev_best = f(x_start)
    no_improv = 0
    res = [[x_start, prev_best]]

    for i in range(dim):
        x = copy.copy(x_start)
        x[i] = x[i] + step
        score = f(x)
        res.append([x, score])

    # simplex iter
    iters = 0
    while 1:
        # order
        res.sort(key=lambda x: x[1])
        best = res[0][1]

        # break after max_iter
        if max_iter and iters >= max_iter:
            return res[0]
        iters += 1

        # break after no_improv_break iterations with no improvement
        print('...best so far:', best)

        if best < prev_best - no_improve_thr:
            no_improv = 0
            prev_best = best
        else:
            no_improv += 1

        if no_improv >= no_improv_break:
            return res[0]

        # centroid
        x0 = [0.] * dim
        for tup in res[:-1]:
            for i, c in enumerate(tup[0]):
                x0[i] += c / (len(res)-1)

        # reflection
        xr = x0 + alpha*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xr = progress_within_bounds(x0, x0 + alpha*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        rscore = f(xr)
        if res[0][1] <= rscore < res[-2][1]:
            del res[-1]
            res.append([xr, rscore])
            continue

        # expansion
        if rscore < res[0][1]:
            xe = x0 + gamma*(x0 - res[-1][0])
            ##################################################################
            ##################################################################
            xe = progress_within_bounds(x0, x0 + gamma*(x0 - res[-1][0]), prog_eps)
            ##################################################################
            ##################################################################
            escore = f(xe)
            if escore < rscore:
                del res[-1]
                res.append([xe, escore])
                continue
            else:
                del res[-1]
                res.append([xr, rscore])
                continue

        # contraction
        xc = x0 + rho*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xc = progress_within_bounds(x0, x0 + rho*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        cscore = f(xc)
        if cscore < res[-1][1]:
            del res[-1]
            res.append([xc, cscore])
            continue

        # reduction
        x1 = res[0][0]
        nres = []
        for tup in res:
            redx = x1 + sigma*(tup[0] - x1)
            score = f(redx)
            nres.append([redx, score])
        res = nres
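As a quick smoke test, assuming is_in_bounds and progress_within_bounds are defined as above, a hypothetical run on a simple quadratic over the unit box might look like this:

import numpy as np

def f(x):
    # Simple convex test objective; its unconstrained minimum, (0.7, 0.3),
    # lies inside the hypothetical unit box.
    return (x[0] - 0.7)**2 + (x[1] - 0.3)**2

best_x, best_score = nelder_mead(f, np.array([0.1, 0.1]), prog_eps=1e-8)
print(best_x, best_score)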
Note: this implementation is GPL, which may or may not work for you. It's extremely easy to modify NM from any pseudocode, though, and you might want to throw in simulated annealing in any case.
Scaling
This is a trickier problem, but jasaarim made an interesting point regarding it. Once your modified NM algorithm has found a point, you might want to run matplotlib.contour while fixing a few dimensions, in order to see how the function behaves. At that point, you might want to rescale one or more of the parameters and rerun the modified NM.
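For instance, a hypothetical contour plot of a two-dimensional slice around the found point (with any remaining dimensions fixed at their found values) could be produced like this:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2-D slice of f over the unit box around the point found by NM.
xs = np.linspace(0., 1., 100)
ys = np.linspace(0., 1., 100)
X, Y = np.meshgrid(xs, ys)
Z = np.array([[f(np.array([x, y])) for x in xs] for y in ys])

cs = plt.contour(X, Y, Z, levels=20)
plt.colorbar(cs)
plt.show()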