The difference between scipy.leastsq and scipy.least_squares

I was wondering what the difference is between the two methods scipy.optimize.leastsq and scipy.optimize.least_squares?

When I use them, they give minor differences in chi^2:

 >>> solution0 = (p0.fun).reshape(100, 100)
 >>> # p0.fun are the residuals of my fit function, np.ravel'ed, as returned by least_squares
 >>> print(np.sum(np.square(solution0)))
 0.542899505806
 >>> solution1 = np.square(median - solution1)
 >>> # solution1 is the solution found by leastsq; leastsq does not return residuals,
 >>> # so I have to subtract it from the median to get the residuals (my special case)
 >>> print(np.sum(solution1))
 0.54402852325

Can anyone expand on this or point out where I can find alternative documentation? The scipy one is a little cryptic.
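For reference, here is a minimal, self-contained sketch (using a made-up exponential-decay model rather than my actual fit function) of how the two are called and how a chi^2 can be computed from each:

    import numpy as np
    from scipy.optimize import leastsq, least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 100)
    y = 2.5 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)

    def residuals(p, t, y):
        # Residuals of the toy model: model minus data
        return p[0] * np.exp(-p[1] * t) - y

    p_init = [1.0, 1.0]

    # Old API: returns the solution vector plus an integer status flag
    p_old, _ = leastsq(residuals, p_init, args=(t, y))

    # New API: returns an OptimizeResult; .fun holds the residuals at the solution
    res = least_squares(residuals, p_init, args=(t, y))

    print(np.sum(np.square(residuals(p_old, t, y))))  # chi^2 from leastsq
    print(np.sum(np.square(res.fun)))                 # chi^2 from least_squares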

+7
optimization python numpy scipy least-squares
3 answers

From the docs for least_squares, it would appear that leastsq is an older wrapper.

See also

leastsq — A legacy wrapper for the MINPACK implementation of the Levenberg-Marquardt algorithm.

Therefore, you should just use least_squares. It appears that least_squares has extra functionality. Chief among them is that the default "method" (i.e. algorithm) used is different:

  • trf : Trust Region Reflective algorithm, particularly suitable for large sparse problems with bounds. Generally a robust method.
  • dogbox : dogleg algorithm with rectangular trust regions, typical use case is small problems with bounds. Not recommended for problems with rank-deficient Jacobian.
  • lm : Levenberg-Marquardt algorithm as implemented in MINPACK. Doesn't handle bounds and sparse Jacobians. Usually the most efficient method for small unconstrained problems.

The default is trf. See Notes for more information.

The old leastsq algorithm was only a wrapper for the lm method, which, as the docs say, is only good for small unconstrained problems.

The difference you see in your results might be due to the difference in the algorithms being employed.
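If you want to rule the algorithm in or out as the cause, one option is to run both methods through least_squares on the same problem and compare the chi^2 values. A minimal sketch, with a placeholder residual function standing in for the real fit:

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(p, t, y):
        # Placeholder residual function; substitute your own
        return p[0] * np.exp(-p[1] * t) - y

    t = np.linspace(0, 5, 100)
    y = 2.0 * np.exp(-0.5 * t)
    p_init = [1.0, 1.0]

    res_trf = least_squares(residuals, p_init, args=(t, y))              # default: method="trf"
    res_lm = least_squares(residuals, p_init, args=(t, y), method="lm")  # the old leastsq algorithm

    # Compare chi^2 from the two algorithms
    print(np.sum(res_trf.fun**2), np.sum(res_lm.fun**2))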

+5

With least_squares you can specify upper and lower bounds for each variable.

There are a few more features that leastsq does not provide, if you compare the docstrings.
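For illustration, a small sketch of the bounds syntax (the line model and data here are placeholders):

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(p, t, y):
        # Placeholder line model: slope p[0], intercept p[1]
        return p[0] * t + p[1] - y

    t = np.linspace(0, 1, 50)
    y = 0.8 * t + 0.1

    # bounds is a (lower, upper) pair with one entry per variable:
    # constrain the slope to [0, 1] and leave the intercept free
    res = least_squares(residuals, [0.5, 0.0], args=(t, y),
                        bounds=([0.0, -np.inf], [1.0, np.inf]))
    print(res.x)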

+2

The key reason for writing the new Scipy function least_squares is to allow for upper and lower bounds on the variables (also called "box constraints"). This was a highly requested feature.

This apparently simple addition is actually far from trivial, and required completely new algorithms, specifically the dogleg (method="dogbox" in least_squares) and the Trust Region Reflective (method="trf") algorithms, which allow for a robust and efficient treatment of box constraints (details on the algorithms are given in the references of the relevant Scipy documentation).

Support for large-scale problems and sparse Jacobians is also important.
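As an illustration of that support, here is a toy sketch: jac_sparsity and tr_solver are actual least_squares options, while the diagonal-Jacobian problem is invented for the example:

    import numpy as np
    from scipy.sparse import identity
    from scipy.optimize import least_squares

    n = 10000  # a large number of variables

    def residuals(x):
        # Toy separable problem: each residual depends on a single variable,
        # so the Jacobian is diagonal and therefore extremely sparse
        return x**2 - 4.0

    # Declaring the sparsity structure and using the iterative LSMR
    # trust-region solver avoids ever forming a dense Jacobian
    res = least_squares(residuals, np.ones(n),
                        jac_sparsity=identity(n), tr_solver="lsmr")
    print(res.x[:3])  # each entry converges towards 2.0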

When bounds on the variables are not needed and the problem is not very large, the algorithms in the new Scipy function least_squares have little, if any, advantage over the Levenberg-Marquardt MINPACK implementation used in the old leastsq one.

However, the same Fortran MINPACK code is called by both the old leastsq and the new least_squares with the option method="lm". For this reason, the old leastsq is now deprecated and not recommended for new code.
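A quick sketch (again with a made-up model) showing that the two calls agree:

    import numpy as np
    from scipy.optimize import leastsq, least_squares

    def residuals(p, t, y):
        # Made-up exponential model
        return p[0] * np.exp(-p[1] * t) - y

    t = np.linspace(0, 5, 50)
    y = 3.0 * np.exp(-1.2 * t)

    p_old, _ = leastsq(residuals, [1.0, 1.0], args=(t, y))
    res_new = least_squares(residuals, [1.0, 1.0], args=(t, y), method="lm")

    # Both calls end up in the same Fortran MINPACK routines,
    # so the fitted parameters should agree essentially to machine precision
    print(p_old)
    print(res_new.x)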

+2
