Sounds like you're pretty new to minimization techniques. Whenever I need to learn a new set of numerical methods, I usually look at Numerical Recipes. It's a book that gives a good overview of the most common methods in the area, their trade-offs, and (importantly) where to look in the literature for more information. It's usually not where I stop, but it's often a useful starting point.
For example, if your function is expensive to evaluate, your goal is to minimize the number of evaluations needed to converge. If you have analytic expressions for the gradient, then a gradient-based method will probably work in your favor, assuming the function and its gradient are well behaved (no singularities) in the region of interest.
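For instance, here's a minimal sketch using SciPy's BFGS with an analytic gradient. The Rosenbrock function is just a stand-in for your own expensive objective, and SciPy itself is my choice of tooling, not something tied to your problem:

```python
# Minimal sketch: gradient-based minimization (BFGS) with an
# analytic gradient. Rosenbrock stands in for your objective.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock function: global minimum at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad_f(x):
    # Analytic gradient -- supplying this avoids costly
    # finite-difference evaluations of f.
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

result = minimize(f, x0=np.array([-1.2, 1.0]), jac=grad_f, method="BFGS")
print(result.x, result.nfev)  # solution and number of f evaluations
```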
If you don't have analytic gradients, you are almost always better off using an approach such as downhill simplex that only evaluates the function (and not its gradients). Numerical gradients are expensive.
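Using the same stand-in objective as above, the gradient-free version is essentially one method flag away; Nelder-Mead (downhill simplex) never touches a gradient:

```python
# Minimal sketch: gradient-free minimization via downhill simplex
# (Nelder-Mead). Only f itself is ever evaluated.
import numpy as np
from scipy.optimize import minimize

def f(x):  # same Rosenbrock stand-in as above
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

result = minimize(f, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x, result.nfev)
```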
Also note that all of these approaches converge to local minima, so they are quite sensitive to the point at which you start the optimizer. Global optimization is a completely different beast.
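One cheap mitigation short of true global optimization (a common trick, not a cure) is a multi-start: run the local optimizer from several random starting points and keep the best result. A sketch, with arbitrary bounds and restart count:

```python
# Minimal multi-start sketch: run the local optimizer from several
# random starting points and keep the best local minimum found.
import numpy as np
from scipy.optimize import minimize

def f(x):  # same Rosenbrock stand-in as above
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-2.0, 2.0, size=2)  # random start in a box
    res = minimize(f, x0, method="Nelder-Mead")
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```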
As a final thought, almost all of the minimization code you can find will be quite efficient. The real cost of minimization is in the cost-function evaluations. You should spend your time profiling and optimizing your cost function, and choose an algorithm that minimizes the number of times you need to call it (methods such as steepest descent, conjugate gradient, and BFGS all shine on different problems).
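If you want to see that trade-off concretely, wrap your objective in a call counter and compare methods. Note that CG and BFGS run without an analytic gradient here, so their counts include the finite-difference evaluations mentioned earlier:

```python
# Minimal sketch: compare algorithms by how many times they
# actually call the objective.
import numpy as np
from scipy.optimize import minimize

def f(x):  # same Rosenbrock stand-in as above
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def counted(fn):
    # Wrap fn so every call (including finite-difference calls
    # made internally by the optimizer) increments a counter.
    def wrapper(x):
        wrapper.calls += 1
        return fn(x)
    wrapper.calls = 0
    return wrapper

for method in ("Nelder-Mead", "CG", "BFGS"):
    g = counted(f)
    minimize(g, np.array([-1.2, 1.0]), method=method)
    print(method, g.calls)
```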
In terms of actual code, you can find many good routines in NETLIB, in addition to the other libraries that have been mentioned. Most of the routines are in FORTRAN 77, but not all; to convert them to C, f2c is very useful.