Odd SciPy ODE Integration Error

I'm implementing a very simple susceptible-infected-recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for certain parameter values, and I can't work out why. The code I'm using is as follows:

    import numpy as np
    from pylab import *
    import scipy.integrate as spi

    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn = (S0, I0, R0)
    beta = 0.50
    gamma = 1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(PopIn, t):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs = np.zeros((3))
        Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    SIR = spi.odeint(eq_system, PopIn, t_interval)

This results in the following error:

    lsoda--  at current t (=r1), mxstep (=i1) steps
             taken on this call before reaching tout
          In above message,  I1 =       500
          In above message,  R1 =  0.7818108252072E+04
    Excess work done on this call (perhaps wrong Dfun type).
    Run with full_output = 1 to get quantitative information.

Usually when I run into a problem like this there's something terminally wrong with the equation system I've set up, but I can't see anything wrong with it. Strangely, it also works if you change mu to something like 1/15550. In case I had gotten something wrong with the system, I also implemented the model in R as follows:

    require(deSolve)

    sir.model <- function (t, x, params) {
      S <- x[1]
      I <- x[2]
      R <- x[3]
      with (
        as.list(params),
        {
          dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
          dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
          dR <- gamma*I - mu*R
          res <- c(dS, dI, dR)
          list(res)
        }
      )
    }

    times <- seq(0, 15000, by = 1)
    # Note: the elements must be *named* (beta = ..., not beta <- ...) so that
    # with(as.list(params), ...) can actually find them.
    params <- c(beta = 0.50, gamma = 1/10, mu = 1/25550)
    xstart <- c(S = 99, I = 1, R = 0)
    out <- as.data.frame(lsoda(xstart, times, sir.model, params))

This also uses lsoda, but it seems to run without a hitch. Can anyone see what is going wrong in the Python code?

2 answers

I think that for the parameter values you've chosen you're running into problems with stiffness - due to numerical instability, the solver's step size is being forced down to extremely small values in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to "stiff" and "non-stiff" systems, but in this case it seems to be failing to switch to the stiff methods.
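
As the error message itself suggests, one way to see this is to rerun the failing call with full_output and inspect the solver diagnostics. A minimal sketch, reusing eq_system, PopIn and t_interval from the question ('hu' and 'mused' are standard keys in the info dict odeint returns):

    # Rerun with diagnostics; odeint returns whatever it managed to compute
    # plus an info dict describing the solver's behaviour.
    SIR, info = spi.odeint(eq_system, PopIn, t_interval, full_output=True)
    print(info['hu'][:10])     # step sizes the solver actually used
    print(info['mused'][:10])  # method per step: 1 = Adams (non-stiff), 2 = BDF (stiff)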

Very crudely, you can just massively increase the maximum allowed steps and the solver will get there eventually:

    SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)

A better option is to use the object-oriented ODE solver scipy.integrate.ode, which lets you explicitly choose whether to use stiff or non-stiff methods:

    import numpy as np
    from pylab import *
    import scipy.integrate as spi

    def run():
        #Parameter Values
        S0 = 99.
        I0 = 1.
        R0 = 0.
        PopIn = (S0, I0, R0)
        beta = 0.50
        gamma = 1/10.
        mu = 1/25550.
        t_end = 15000.
        t_start = 1.
        t_step = 1.
        t_interval = np.arange(t_start, t_end, t_step)

        #Solving the differential equation. Solves over t for initial conditions PopIn
        #Note the (t, PopIn) argument order required by scipy.integrate.ode
        def eq_system(t, PopIn):
            '''Defining SIR System of Equations'''
            #Creating an array of equations
            Eqs = np.zeros((3))
            Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
            Eqs[1] = (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
            Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
            return Eqs

        ode = spi.ode(eq_system)

        # BDF method suited to stiff systems of ODEs
        ode.set_integrator('vode', nsteps=500, method='bdf')
        ode.set_initial_value(PopIn, t_start)

        ts = []
        ys = []

        while ode.successful() and ode.t < t_end:
            ode.integrate(ode.t + t_step)
            ts.append(ode.t)
            ys.append(ode.y)

        t = np.vstack(ts)
        s, i, r = np.vstack(ys).T

        fig, ax = subplots(1, 1)
        ax.plot(t, s, label='Susceptible')
        ax.plot(t, i, label='Infected')
        ax.plot(t, r, label='Recovered')
        ax.set_xlim(t_start, t_end)
        ax.set_ylim(0, 100)
        ax.set_xlabel('Time')
        ax.set_ylabel('Percent')
        ax.legend(loc=0, fancybox=True)

        return t, s, i, r, fig, ax

Output:

[Plot: Susceptible, Infected and Recovered populations vs. Time (Percent, 0-100)]
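
As an aside: on newer SciPy versions (1.0+), scipy.integrate.solve_ivp exposes the same choice of integrator directly. A minimal sketch, assuming the eq_system(t, PopIn) signature and parameters from the run() function above:

    from scipy.integrate import solve_ivp

    # method='BDF' selects a stiff-appropriate integrator; t_eval only
    # controls where the solution is reported, not the internal steps.
    sol = solve_ivp(eq_system, (t_start, t_end), PopIn, method='BDF',
                    t_eval=t_interval)
    s, i, r = sol.y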


The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision causes PopIn[1] to go slightly negative (approximately -3.549e-12) near t = 322.9. Then eventually the solution blows up near t = 7818.093, with PopIn[0] heading toward +infinity and PopIn[1] heading toward -infinity.
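
For what it's worth, here's a minimal sketch of how one could confirm this numerically, reusing eq_system, PopIn and t_interval from the question and the large mxstep from the other answer so lsoda pushes past the slowdown:

    # Locate the first negative value in the infected compartment I = PopIn[1].
    SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
    first_neg = np.argmax(SIR[:, 1] < 0)  # index of first True, or 0 if none
    if SIR[first_neg, 1] < 0:
        print("I(t) first goes negative near t =", t_interval[first_neg])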

Edit: I removed my earlier suggestion of a "quick fix"; it was a questionable hack.

