How can I refer to a specific point of my function inside NDSolve?

Problem:

I am trying to solve this differential equation:

K[x_, x1_] := 1; NDSolve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], A[0] == 0, A'[1] == 1}, A[x], x] 

and I get errors ( Function::slotn and NDSolve::ndnum )
(it should return a numerical function equal to 3/16 x^2 + 5/8 x )

I am looking for a way to solve this differential equation: is there a way to write it in better form so that NDSolve understands this? Is there any other feature or package that can help you?

Note 1: In my complete problem K[x, x1] not 1 - it depends (in a complicated way) on x and x1 .
Note 2: The naive derivation of the two sides of the equation with respect to x will not work, since the integral limits are defined.

My first impression:

It seems that I don’t like Mathematica, referring to a point in A[x] - the same errors occur when I make this simplified version:

 NDSolve[{A''[x] == A[0.5], A[0] == 0, A'[1] == 1}, A[x], x] 

(it should return a numerical function that is 2/11 x^2 + 7/11 x )

In this case, this problem can be avoided by analytically solving A''[x] == c and then finding c , but in my first problem it does not seem to work - it turns the differential equation into an integral one, which (N) DSolve does not resolve afterwards.

+4
source share
3 answers

I can suggest a way to reduce your equation to an integral equation that can be solved numerically by approximating its core with a matrix, thereby reducing the integration to matrix multiplication.

First, it’s clear that the equation can be integrated twice over x , first from 1 to x , and then from 0 to x , so that:

enter image description here

Now we can discretize this equation by putting it on an equidistant grid:

enter image description here

Here A[x] becomes a vector, and the integrated core iniIntK becomes a matrix, and integration is replaced by matrix multiplication. Then the task is reduced to a system of linear equations.

The simplest case (which I will consider here) is when the iniIntK kernel can be obtained analytically - in this case this method will be quite fast. Here is the function to create an integrated kernel as a pure function:

 Clear[computeDoubleIntK] computeDoubleIntK[kernelF_] := Block[{x, x1}, Function[ Evaluate[ Integrate[ Integrate[kernelF[y, x1], {y, 1, x}] /. x -> y, {y, 0, x}] /. {x -> #1, x1 -> #2}]]]; 

In our case:

 In[99]:= K[x_,x1_]:=1; In[100]:= kernel = computeDoubleIntK[K] Out[100]= -#1+#1^2/2& 

Here is the function to create the kernel matrix and the vector rh, s:

  computeDiscreteKernelMatrixAndRHS[intkernel_, a0_, aprime1_ , delta_, interval : {_, _}] := Module[{grid, rhs, matrix}, grid = Range[Sequence @@ interval, delta]; rhs = a0 + aprime1*grid; (* constant plus a linear term *) matrix = IdentityMatrix[Length[grid]] - delta*Outer[intkernel, grid, grid]; {matrix, rhs}] 

To give a very rough idea of ​​how this might look (I use delta = 1/2 ):

 In[101]:= computeDiscreteKernelMatrixAndRHS[kernel,0,1,1/2,{0,1}] Out[101]= {{{1,0,0},{3/16,19/16,3/16},{1/4,1/4,5/4}},{0,1/2,1}} 

Now we need to solve the linear equation and interpolate the result, which is performed using the following function:

 Clear[computeSolution]; computeSolution[intkernel_, a0_, aprime1_ , delta_, interval : {_, _}] := With[{grid = Range[Sequence @@ interval, delta]}, Interpolation@Transpose [{ grid, LinearSolve @@ computeDiscreteKernelMatrixAndRHS[intkernel, a0, aprime1, delta,interval] }]] 

Here I will name it with delta = 0.1 :

 In[90]:= solA = computeSolution[kernel,0,1,0.1,{0,1}] Out[90]= InterpolatingFunction[{{0.,1.}},<>] 

Now we will build the result in comparison with the exact analytical solution found by @Sasha, as well as with an error:

enter image description here

I deliberately chose delta large enough to make errors visible. If you selected delta say 0.01 , the graphs will be visually identical. Of course, the price of making a smaller delta is the need to create and solve large matrices.

For kernels that can be obtained analytically, the main bottleneck will be in LinearSolve , but in practice it is pretty fast (for matrices it is not too large). When kernels cannot be integrated analytically, the main bottleneck will be the calculation of the kernel at many points (matrix creation. The inverse matrix has great asymptotic complexity, but this will begin to play a role for really large matrices - which are not needed in this approach, since it can be combined with iterative - see below). Usually you define:

 intK[x_?NumericQ, x1_?NumericQ] := NIntegrate[K[y, x1], {y, 1, x}] intIntK[x_?NumericQ, x1_?NumericQ] := NIntegrate[intK[z, x1], {z, 0, x}] 

As a way to speed it up in such cases, you can pre-copy the intK core to the grid and then interpolate the same for intIntK . This, however, will lead to additional errors that you will have to evaluate (consider).

The grid itself does not have to be equidistant (I just used it for simplicity), but it can (and probably should) be adaptive and, as a rule, uneven.

As a final illustration, we consider an equation with a nontrivial, but symbolically integrable kernel:

 In[146]:= sinkern = computeDoubleIntK[50*Sin[Pi/2*(#1-#2)]&] Out[146]= (100 (2 Sin[1/2 \[Pi] (-#1+#2)]+Sin[(\[Pi] #2)/2] (-2+\[Pi] #1)))/\[Pi]^2& In[157]:= solSin = computeSolution[sinkern,0,1,0.01,{0,1}] Out[157]= InterpolatingFunction[{{0.,1.}},<>] 

enter image description here

Here are a few checks:

 In[163]:= Chop[{solSin[0],solSin'[1]}] Out[163]= {0,1.} In[153]:= diff[x_?NumericQ]:= solSin''[x] - NIntegrate[50*Sin[Pi/2*(#1-#2)]&[x,x1]*solSin[x1],{x1,0,1}]; In[162]:= diff/@Range[0,1,0.1] Out[162]= {-0.0675775,-0.0654974,-0.0632056,-0.0593575,-0.0540479,-0.0474074, -0.0395995,-0.0308166,-0.0212749,-0.0112093,0.000369261} 

In conclusion, I just want to emphasize that for this method it is necessary to conduct a thorough analysis of the error assessment, which I did not do.

EDIT

You can also use this method to obtain an initial approximate solution, and then iteratively improve it using FixedPoint or other means - this way you will have relatively fast convergence and be able to achieve the required accuracy without having to build and solve huge matrices.

+7
source

This complements the approach of Leonid Shifrin. We start with a linear function that interpolates the value and the first derivative at the starting point. We use this when integrating with a given kernel function. Then we can iterate using each previous approximation in the integrated kernel, which is used for the next approximation.

An example is shown below using a more complex kernel than just a constant function. I'll take it in two iterations and show the discrepancy tables.

 kernel[x_, y_] := Sqrt[x]/(y^2 + 1/5)*Sin[x^2 + y] intkern[x_?NumericQ, aa_] := NIntegrate[kernel[x, y]*aa[y], {y, 0, 1}, MinRecursion -> 2, AccuracyGoal -> 3] Clear[a]; a0 = 0; a1 = 1; a[0][x_] := a0 + a1*x soln1 = a[1][x] /. First[NDSolve[{(a[1]^\[Prime]\[Prime])[x] == intkern[x, a[0], y], a[1][0] == a0, a[1][1] == a1}, a[1][x], {x, 0, 1}]]; a[1][x_] = soln1; In[283]:= Table[a[1]''[x] - intkern[x, a[1]], {x, 0., 1, .1}] Out[283]= {4.336808689942018*10^-19, 0.01145100326794241, \ 0.01721655945379122, 0.02313249302884235, 0.02990900241909161, \ 0.03778448183557359, 0.04676409320217928, 0.05657128568058478, \ 0.06665818935524814, 0.07624149919589895, 0.08412643746245929} In[285]:= soln2 = a[2][x] /. First[NDSolve[{(a[2]^\[Prime]\[Prime])[x] == intkern[x, a[1]], a[2][0] == a0, a[2][1] == a1}, a[2][x], {x, 0, 1}]]; a[2][x_] = soln2; In[287]:= Table[a[2]''[x] - intkern[x, a[2]], {x, 0., 1, .1}] Out[287]= {-2.168404344971009*10^-19, -0.001009606971360516, \ -0.00152476679745811, -0.002045817184941901, -0.002645356229312557, \ -0.003343218015068372, -0.004121109614310836, -0.004977453722712966, \ -0.005846840469889258, -0.006731367269472544, -0.007404971586975062} 

Thus, at this stage, we have errors less than 0.01. Not bad. One of the drawbacks is that it was rather slow to get a second approximation. There may be ways to customize NDSolve to improve this.

This complements the Leonid method for two reasons.

(1) If this did not converge well, because the initial linear approximation was not close enough to the true result, instead, one could start with the approximation found using the finite difference scheme. That would be like what he did.

(2) To a large extent, he indicated this himself, as a method that could follow him and make refinements.

Daniel Lichtblau

+4
source

The way you write your equation is A''[x] == const , and the constant is independent of x . Therefore, the solution always has the form of a quadratic polynomial. Then your task comes down to a solution for undefined coefficients:

 In[13]:= A[x_] := a2 x^2 + a1 x + a0; In[14]:= K[x_, x1_] := 1; In[16]:= Solve[{A''[x] == Integrate[K[x, x1] A[x1], {x1, 0, 1}], A[0] == 0, A'[1] == 1}, {a2, a1, a0}] Out[16]= {{a2 -> 3/16, a1 -> 5/8, a0 -> 0}} In[17]:= A[x] /. First[%] Out[17]= (5 x)/8 + (3 x^2)/16 
0
source

All Articles