# Implementation of multistep methods

We now consider some of the practical issues that arise when multistep formulas are used to solve IVPs. Implementation of the explicit case is relatively straightforward. In what follows we use boldface for the vector form of the problem, \(\mathbf{u}'=\mathbf{f}(t,\mathbf{u})\). For instance, the explicit AB4 method is defined by the formula

\[\mathbf{u}_{i+1} = \mathbf{u}_i + h \left( \tfrac{55}{24}\mathbf{f}_i - \tfrac{59}{24}\mathbf{f}_{i-1} + \tfrac{37}{24}\mathbf{f}_{i-2} - \tfrac{9}{24}\mathbf{f}_{i-3} \right), \qquad \mathbf{f}_j = \mathbf{f}(t_j,\mathbf{u}_j).\]
ab4 shows a basic implementation of this formula. Observe that rk4 is used to generate the starting values \(\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3\) that are needed before the iteration formula takes over. To produce them, rk4 solves the IVP over the time interval \(a \le t \le a+3h\), using the same step size \(h\) as the AB4 iteration. These values are then used by ab4 to compute \(\mathbf{f}_0,\ldots,\mathbf{f}_3\) and get the main iteration started.

For each value of \(i\), the formula uses the four most recently known values of the solution’s derivative in order to advance by one step. In ab4 only these values of \(\mathbf{f}\) are stored, newest first, and the linear combination implied in (194) is computed from the coefficient vector \(\boldsymbol{\sigma}\):

\[\mathbf{u}_{i+1} = \mathbf{u}_i + h \sum_{j=1}^{4} \sigma_j\, \mathbf{f}_{i+1-j}, \qquad \boldsymbol{\sigma} = \begin{bmatrix} 55/24 \\ -59/24 \\ 37/24 \\ -9/24 \end{bmatrix}.\]

We have distributed the factor of \(1/24\) in order to point out that the \(4\times 1\) constant vector \(\boldsymbol{\sigma}\) is just the vector of coefficients of the generating polynomial \(\sigma(z)\) from (189). At the start of each iteration, the value of \(\mathbf{f}\) at the most recent solution step is unknown, so a call is made to evaluate it, and the other stored values are shifted one place back in time.
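This shifting of the stored values can be seen in isolation; the numbers below are stand-ins of our own, not values from an actual solution:

```julia
# Illustration of the history shift used in ab4: prepend the newest value
# and keep the first k-1 entries, discarding the oldest.
k = 4
f = [30.0, 20.0, 10.0, 0.0]     # stand-in derivative values, newest first
newf = 40.0
f = [newf, f[1:k-1]...]         # splatting keeps the three most recent values
```

After the shift, `f` holds `[40.0, 30.0, 20.0, 10.0]`, ready for the next linear combination.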

**4th-order Adams–Bashforth formula for an IVP.**

```
"""
    ab4(ivp,n)

Apply the Adams-Bashforth 4th order method to solve the given IVP
using `n` time steps.
"""
function ab4(ivp,n)
    # Time discretization.
    a,b = ivp.tspan
    h = (b-a)/n
    t = [ a + i*h for i in 0:n ]

    # Constants in the AB4 method.
    k = 4
    sigma = [55, -59, 37, -9]/24

    # Find starting values by RK4.
    u = fill(float(ivp.u0),n+1)
    rkivp = ODEProblem(ivp.f,ivp.u0,(a,a+(k-1)*h),ivp.p)
    ts,us = rk4(rkivp,k-1)
    u[1:k] = us[1:k]

    # Compute history of u' values, from newest to oldest.
    f = [ ivp.f(u[k-i],ivp.p,t[k-i]) for i in 1:k-1 ]

    # Time stepping.
    for i in k:n
        f = [ ivp.f(u[i],ivp.p,t[i]), f[1:k-1]... ]        # new value of du/dt
        u[i+1] = u[i] + h*sum(f[j]*sigma[j] for j in 1:k)  # advance a step
    end
    return t,u
end
```
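As a sanity check of the coefficients, here is a self-contained scalar version (a sketch of our own, independent of the rk4 and ODEProblem interfaces assumed above) that confirms fourth-order convergence on the test problem \(u'=-u\), \(u(0)=1\), whose exact solution is \(e^{-t}\):

```julia
# Self-contained scalar AB4, structured like ab4 above.
function ab4_scalar(f, a, b, u0, n)
    h = (b - a) / n
    t = [a + i*h for i in 0:n]
    u = zeros(n + 1)
    u[1] = u0
    # Classical RK4 supplies the three starting values.
    for i in 1:3
        k1 = f(t[i], u[i])
        k2 = f(t[i] + h/2, u[i] + h*k1/2)
        k3 = f(t[i] + h/2, u[i] + h*k2/2)
        k4 = f(t[i+1], u[i] + h*k3)
        u[i+1] = u[i] + h*(k1 + 2k2 + 2k3 + k4)/6
    end
    sigma = [55, -59, 37, -9] / 24
    fhist = [f(t[j], u[j]) for j in 3:-1:1]    # history, newest to oldest
    for i in 4:n
        fhist = [f(t[i], u[i]); fhist[1:3]]    # prepend newest, drop oldest
        u[i+1] = u[i] + h * sum(sigma[j] * fhist[j] for j in 1:4)
    end
    return t, u
end

f = (t, u) -> -u
err(n) = abs(ab4_scalar(f, 0.0, 1.0, 1.0, n)[2][end] - exp(-1))
ratio = err(20) / err(40)    # should be near 2^4 = 16
```

Halving the step size reduces the final-time error by a factor close to \(2^4=16\), as expected for a fourth-order method.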

## Implicit methods

**2nd-order Adams–Moulton (trapezoid) formula for an IVP.**

```
"""
    am2(ivp,n)

Apply the Adams-Moulton 2nd order method to solve the given IVP using
`n` time steps.
"""
function am2(ivp,n)
    # Time discretization.
    a,b = ivp.tspan
    h = (b-a)/n
    t = [ a + i*h for i in 0:n ]

    # Initialize output.
    u = fill(float(ivp.u0),n+1)

    # Time stepping.
    for i in 1:n
        # Data that does not depend on the new value.
        known = u[i] + h/2*ivp.f(u[i],ivp.p,t[i])
        # Find a root for the new value.
        F = z -> z .- h/2*ivp.f(z,ivp.p,t[i+1]) .- known
        unew = levenberg(F,known)
        u[i+1] = unew[end]
    end
    return t,u
end
```

The implementation of an implicit multistep method is a bit more involved. Consider the second-order implicit formula AM2, also known as the trapezoid method. To advance from step \(i\) to \(i+1\), we must solve

\[\mathbf{z} - \tfrac{1}{2} h\, \mathbf{f}(t_{i+1},\mathbf{z}) = \mathbf{u}_i + \tfrac{1}{2} h\, \mathbf{f}(t_i,\mathbf{u}_i)\]

for \(\mathbf{z}\), and then set \(\mathbf{u}_{i+1}=\mathbf{z}\). This equation has the form \(\mathbf{g}(\mathbf{z})=\boldsymbol{0}\), so we have a rootfinding problem as in Roots of nonlinear equations. An implementation of AM2 using levenberg from Quasi-Newton methods is shown in am2.

At each time step, it defines a function `F` that evaluates the difference between the two sides of (196) for any given value of \(\mathbf{z}\). The time stepping iteration calls levenberg at each step, starting from the value \(\mathbf{u}_i+\tfrac{1}{2}h\mathbf{f}_i\), which lies halfway between \(\mathbf{u}_i\) and the Euler step \(\mathbf{u}_i+h\mathbf{f}_i\). A robust code would intercept the case in which levenberg fails to converge, but we ignore this issue for the sake of simplicity.
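If levenberg is not at hand, any rootfinder will serve in its place. The following scalar sketch (our own illustration, with a basic Newton iteration using a finite-difference derivative standing in for levenberg) confirms the expected second-order convergence on \(u'=-u\), \(u(0)=1\):

```julia
# Scalar AM2 (trapezoid) sketch; a simple Newton iteration replaces levenberg.
function am2_scalar(f, a, b, u0, n)
    h = (b - a) / n
    t = [a + i*h for i in 0:n]
    u = zeros(n + 1)
    u[1] = u0
    for i in 1:n
        # Data that does not depend on the new value.
        known = u[i] + h/2 * f(t[i], u[i])
        # Solve F(z) = 0 for the new value, starting from `known`.
        F = z -> z - h/2 * f(t[i+1], z) - known
        z = known
        for _ in 1:20
            dF = (F(z + 1e-7) - F(z)) / 1e-7   # finite-difference derivative
            step = F(z) / dF
            z -= step
            abs(step) < 1e-12 && break
        end
        u[i+1] = z
    end
    return t, u
end

g = (t, u) -> -u
err2(n) = abs(am2_scalar(g, 0.0, 1.0, 1.0, n)[2][end] - exp(-1))
ratio2 = err2(20) / err2(40)   # should be near 2^2 = 4
```

Halving the step size reduces the final-time error by a factor near \(2^2=4\), consistent with second-order accuracy.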

## Stiff problems

At each time step in am2 (or any implicit IVP solver), a rootfinding iteration of unknown length is needed, which makes the cost per step of an implicit method much greater than that of an explicit one. Given this drawback, you would be justified in wondering whether implicit methods are ever competitive. The answer is emphatically yes, as Accuracy isn’t everything demonstrates.

Although the result of Accuracy isn’t everything may seem strange, there is no contradiction: a fourth-order explicit formula is indeed more accurate than a second-order implicit one, in the limit \(h\to 0\). But there is another limit to consider, \(t\to \infty\) with \(h\) fixed, and in this one the implicit method wins. Such problems are called stiff. A complete mathematical description will wait for a later chapter, but a sure sign of stiffness is the presence of phenomena on widely different time scales. In the example, there is “slow time,” where the solution changes very little, and “fast time,” when it suddenly jumps from zero to one. For stiff problems, implicit methods are usually preferred, because they can take far fewer steps than an explicit method, more than offsetting the extra work required per step.
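The trade-off is visible even on the simplest linear model. The sketch below (our own illustration, not the example from the text) applies explicit Euler and backward Euler to \(u'=-100u\), \(u(0)=1\), with a step size far too large for the explicit method; because the problem is linear, the implicit equation can be solved exactly:

```julia
# Explicit Euler vs. backward (implicit) Euler on the stiff model u' = -100u.
function euler_compare(lambda, h, nsteps)
    u_exp, u_imp = 1.0, 1.0
    for _ in 1:nsteps
        u_exp += h * lambda * u_exp   # explicit: amplification factor 1 + hλ = -9
        u_imp /= 1 - h * lambda       # implicit: amplification factor 1/(1-hλ) = 1/11
    end
    return u_exp, u_imp
end

u_exp, u_imp = euler_compare(-100.0, 0.1, 50)
```

The explicit iterates grow like \(|1+h\lambda|^i = 9^i\) even though the exact solution decays to zero almost immediately, while the implicit iterates decay like \((1/11)^i\), mirroring the true behavior.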

## Adaptivity

As with RK methods, we can run two time stepping methods simultaneously in order to estimate the error and adjust the step size accordingly. For example, we could pair AB3 with AB4 at practically no cost, because the methods differ only in how they include known information from the recent past. The more accurate AB4 value should allow an accurate estimate of the local error in the AB3 value.

Because multistep methods rely on the solution history, though, changing the step size is more complicated than for RK methods. If \(h\) is changed, then the historical values \(\mathbf{u}_{i-1},\mathbf{u}_{i-2},\ldots\) and \(\mathbf{f}_{i-1},\mathbf{f}_{i-2},\ldots\) are no longer given at the right moments in time to apply the iteration formula. A typical remedy is to use interpolation to re-evaluate the historical values at the appropriate times. The details are important but not especially illuminating, and we do not give them here.
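For instance, when the step size changes, the stored derivative values can be re-evaluated at the new, evenly spaced times via their interpolating polynomial. A hypothetical sketch (the names and the sine stand-in data are our own, not from the text's codes):

```julia
# Evaluate the Lagrange interpolating polynomial through (x[j], y[j]) at xq.
function lagrange_eval(x, y, xq)
    s = zero(eltype(y))
    for j in eachindex(x)
        L = one(xq)
        for m in eachindex(x)
            m != j && (L *= (xq - x[m]) / (x[j] - x[m]))
        end
        s += y[j] * L
    end
    return s
end

# Old history: values at t_i, t_i - h, t_i - 2h, t_i - 3h (newest first).
ti, h, hnew = 2.0, 0.1, 0.05
told = [ti - k*h for k in 0:3]
fold = sin.(told)                  # stand-in for stored derivative values
# Re-sample the history at the spacing hnew needed by the next AB4 step.
tnew = [ti - k*hnew for k in 0:3]
fnew = [lagrange_eval(told, fold, tq) for tq in tnew]
```

For smooth data the cubic interpolant is fourth-order accurate, so the re-sampled history does not degrade the accuracy of AB4.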

## Exercises

⌨ For each IVP, use ab4 to find the solution over the indicated time interval for \(n=250\). Plot the computed solution \((t_i,u_i)\) for \(i=0,\ldots,n\), and separately plot the error \(\bigl(t_i,u_i-\hat{u}(t_i)\bigr)\).

**(a)** \(u' = -2t u, \ 0 \le t \le 2, \ u(0) = 2;\ \hat{u}(t) = 2e^{-t^2}\)

**(b)** \(u' = u + t, \ 0 \le t \le 1, \ u(0) = 2;\ \hat{u}(t) = 1-t+e^t\)

**(c)** \(u' = x^2/[u(1+x^3)],\ 0 \le x \le 3, \ u(0) =1;\ \hat{u}(x) =[1+(2/3)\ln (1+x^3)]^{1/2}\)

**(d)** \(u''+ 9u = 9t, \: 0< t< 2\pi, \: u(0) =1,\: u'(0) = 1; \: \hat{u}(t) = t+\cos (3t)\)

**(e)** \(u''+ 9u = \sin(2t), \: 0< t< 2\pi, \: u(0) =2,\: u'(0) = 1\); \(\quad \hat{u}(t) = (1/5) \sin(3t) + 2 \cos (3t)+ (1/5) \sin (2t)\)

**(f)** \(u''- 9u = 9t, \: 0< t< 1, \: u(0) =2,\: u'(0) = -1; \: \hat{u}(t) = e^{3t} + e^{-3t}-t\)

**(g)** \(u''+ 4u'+ 4u = t, \: 0< t< 4, \: u(0) =1,\: u'(0) = 3/4; \: \hat{u}(t) = (3t+5/4)e^{-2t} + (t-1)/4\)

**(h)** \(x^2 u'' +5xu' + 4u = 0,\: 1<x<e^2, \: u(1) =1, \: u'(1) = -1; \: \hat{u}(x) = x^{-2}( 1 + \ln x)\)

**(i)** \(2 x^2 u'' +3xu' - u = 0,\: 1<x<16, \: u(1) =4, \: u'(1) = -1\); \(\quad \hat{u}(x) = 2(x^{1/2} + x^{-1})\)

**(j)** \(x^2 u'' -xu' + 2u = 0,\: 1<x<e^{\pi}, \: u(1) =3, \: u'(1) = 4\); \(\quad \hat{u}(x) = x \left[ 3 \cos \left( \ln x \right)+\sin \left( \ln x \right) \right]\)

⌨ For each IVP in the preceding problem, use ab4 for \(n=10\cdot2^d\) and \(d=1,\ldots,10\). Make a log-log convergence plot for the final time error \(|u_n-\hat{u}(t_n)|\) versus \(n\), and add a straight line indicating fourth-order convergence.

⌨ For a vector IVP, the update step in ab4 could store the recent derivative values as the columns of a matrix `f` and advance via the matrix-vector product

```
u[:,i+1] = u[:,i] + h*(f*sigma)
```

Explain carefully why this is preferable to

```
u[:,i+1] = u[:,i] + h*f*sigma
```

⌨ Repeat exercise 1 above using am2.

⌨ Repeat exercise 2 above using am2 and comparing to second-order rather than fourth-order convergence.

⌨ Using am2 as a model, write a function `bd2` that applies the BD2 method to solve an IVP. Test the convergence of your function on one of the IVPs in exercise 1 above.

⌨ For numerical purposes, the exact solution of the IVP in Accuracy isn’t everything satisfies \(\hat{u}(400)=1\).

**(a)** Use ab4 with \(n=200,400,600,\ldots,2000\) and make a log-log convergence plot of the error \(|u_n-1|\) as a function of \(n\).

**(b)** Repeat part (a) using am2.

Consider the IVP

\[\begin{split}\mathbf{u}'(t) = \mathbf{A} \mathbf{u}(t), \quad \mathbf{A}= \begin{bmatrix} 0&-4\\4&0 \end{bmatrix}, \quad \mathbf{u}(0) = \begin{bmatrix} 1\\0 \end{bmatrix}.\end{split}\]

**(a)** ✍ Define \(E(t) = \bigl\|\mathbf{u}(t)\bigr\|_2^2\). Show that \(E(t)\) is constant. (Differentiate \(\mathbf{u}^T\mathbf{u}\) with respect to time and show that it simplifies to zero.)

**(b)** ⌨ Use ab4 to solve the IVP for \(t\in[0,20]\) with \(n=100\) and \(n=150\). On a single graph using a log scale on the \(y\)-axis, plot \(|E(t)-E(0)|\) versus time for both solutions. You should see exponential growth in time.

**(c)** ⌨ Repeat part (b) with \(n=400\) and \(n=600\), but use a linear scale on the \(y\)-axis. Now you should see only linear growth of \(|E(t)-E(0)|\).

⌨

**(a)** Modify ab4 to implement the AB2 method.

**(b)** Repeat part (b) of the preceding exercise, using AB2 in place of AB4.

**(c)** Repeat part (c) of the preceding exercise, using AB2 in place of AB4.