Why is Dfunc (gradient) never called when using integrate.odeint in SciPy?

Can anyone provide an example of passing a Jacobian function to integrate.odeint in SciPy? I am trying to run this code from the SciPy odeint tutorial, but it seems like Dfunc (gradient) is never called.

 from numpy import *  # added
 from scipy.integrate import odeint
 from scipy.special import gamma, airy

 y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
 y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
 y0 = [y0_0, y1_0]

 def func(y, t):
     return [t*y[1], y[0]]

 def gradient(y, t):
     print('jacobian')  # added
     return [[0, t], [1, 0]]

 x = arange(0, 4.0, 0.01)
 t = x
 ychk = airy(x)[0]
 y = odeint(func, y0, t)
 y2 = odeint(func, y0, t, Dfun=gradient)
 print(y2)  # added
python scipy odeint
1 answer

Under the hood, scipy.integrate.odeint uses the LSODA solver from the ODEPACK FORTRAN library. To deal with situations where the function you are trying to integrate is stiff, LSODA adaptively switches between two different methods of computing the integral: the Adams method, which is faster but unsuitable for stiff systems, and BDF, which is slower but more robust to stiffness.

The particular function you are trying to integrate is not stiff, so LSODA will use Adams at every iteration. You can verify this by asking odeint to return infodict ( ..., full_output=True ) and checking infodict['mused'].
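As a quick illustration, here is a sketch of that check, using the same Airy system as in the question (the key names full_output and 'mused' are the documented odeint ones):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.special import gamma

# Same Airy system and initial conditions as in the question
y0 = [-1.0/3**(1.0/3.0)/gamma(1.0/3.0), 1.0/3**(2.0/3.0)/gamma(2.0/3.0)]

def func(y, t):
    return [t*y[1], y[0]]

t = np.arange(0, 4.0, 0.01)
y, infodict = odeint(func, y0, t, full_output=True)

# mused[i] is the method used on successful step i: 1 = Adams, 2 = BDF
print(set(infodict['mused']))  # a non-stiff problem should give {1}
```

Since every entry of mused is 1, LSODA stayed on the Adams method for the whole integration, which is why the Jacobian is never needed.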

Since the Adams method does not use the Jacobian, your gradient function will never be called. However, if you give odeint a stiff function to integrate, for example the Van der Pol equation:

 from numpy import arange
 from scipy.integrate import odeint
 from matplotlib.pyplot import plot

 def vanderpol(y, t, mu=1000.):
     return [y[1], mu*(1. - y[0]**2)*y[1] - y[0]]

 def vanderpol_jac(y, t, mu=1000.):
     return [[0, 1], [-2*y[0]*y[1]*mu - 1, mu*(1 - y[0]**2)]]

 y0 = [2, 0]
 t = arange(0, 5000, 1)
 y, info = odeint(vanderpol, y0, t, Dfun=vanderpol_jac, full_output=True)

 print(info['mused'])  # method used (1=adams, 2=bdf)
 print(info['nje'])    # cumulative number of jacobian evaluations
 plot(t, y[:, 0])

you should see that odeint switches to BDF, and that the Jacobian function is now being called.

If you need more control over the solver, you should look at scipy.integrate.ode, which is a much more flexible object-oriented interface to several different integrators.
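For instance, here is a minimal sketch of the same stiff Van der Pol problem solved through the ode interface with the 'lsoda' integrator. Note one gotcha: ode expects the call signature f(t, y), the reverse of odeint's f(y, t) (the integration endpoint of 2000 here is just an arbitrary choice for the example):

```python
from scipy.integrate import ode

# ode uses f(t, y), not f(y, t) as odeint does
def vanderpol(t, y, mu=1000.):
    return [y[1], mu*(1. - y[0]**2)*y[1] - y[0]]

def vanderpol_jac(t, y, mu=1000.):
    return [[0, 1], [-2*y[0]*y[1]*mu - 1, mu*(1 - y[0]**2)]]

solver = ode(vanderpol, vanderpol_jac)
solver.set_integrator('lsoda')      # the same solver odeint wraps; 'vode', 'dopri5', etc. are also available
solver.set_initial_value([2, 0], 0)

# step the solver forward in chunks, with full control over each step
while solver.successful() and solver.t < 2000:
    solver.integrate(solver.t + 10)

print(solver.t, solver.y)
```

This style lets you inspect or change the state between steps, swap integrators, and stop early on a condition, none of which odeint's one-shot interface allows.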

