
Multistep methods

In Runge–Kutta methods we start at $u_i$ to find $u_{i+1}$, taking multiple $f$-evaluations (stages) to achieve high accuracy. In contrast, multistep methods boost accuracy by employing more of the history of the solution, taking information from the recent past. For the discussion in this and following sections, we introduce the shorthand notation

$$f_i = f(t_i,u_i).$$

A $k$-step multistep formula has the general form

$$u_{i+1} = a_{k-1}u_i + \cdots + a_0 u_{i-k+1} + h\,\bigl(b_k f_{i+1} + b_{k-1} f_i + \cdots + b_0 f_{i-k+1}\bigr), \tag{6.6.2}$$

where the constants $a_0,\ldots,a_{k-1}$ and $b_0,\ldots,b_k$ define the particular method. The quantities $u$ and $f$ in (6.6.2) are shown as scalars, but in general they can be vectors.

In order to use (6.6.2) as a numerical method, we iterate through $i=k-1,\ldots,n-1$. The value $u_0$ is determined by the initial condition, but we also need some way of generating the starting values

$$u_1=\alpha_1, \quad \ldots \quad u_{k-1}=\alpha_{k-1}.$$

In practice the starting values are often found using an RK formula.[1]

The difference formula (6.6.2) defines $u_{i+1}$ in terms of known values of the solution and its derivative from the past. In the explicit case with $b_k=0$, Equation (6.6.2) immediately gives a formula for the unknown quantity $u_{i+1}$ in terms of values at time level $t_i$ and earlier. Thus only one new evaluation of $f$ is needed to make a time step, provided that we store the recent history.
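To make the bookkeeping concrete, here is a minimal Python sketch of an AB3 integrator for a scalar IVP, using the coefficients from Table 6.6.1. The function name `ab3` and the choice of the classical RK4 formula to supply the two extra starting values (see the footnote) are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def ab3(f, tspan, u0, n):
    """Sketch of the 3-step Adams-Bashforth (AB3) method for a scalar IVP."""
    a, b = tspan
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    u = np.zeros(n + 1)
    u[0] = u0

    # Starting values u_1, u_2 from the classical RK4 method.
    for i in range(2):
        k1 = f(t[i], u[i])
        k2 = f(t[i] + h / 2, u[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, u[i] + h / 2 * k2)
        k4 = f(t[i] + h, u[i] + h * k3)
        u[i + 1] = u[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Main loop: only one new f-evaluation per step; older values are recycled.
    hist = [f(t[0], u[0]), f(t[1], u[1]), f(t[2], u[2])]   # [f_{i-2}, f_{i-1}, f_i]
    for i in range(2, n):
        u[i + 1] = u[i] + h / 12 * (23 * hist[2] - 16 * hist[1] + 5 * hist[0])
        hist = [hist[1], hist[2], f(t[i + 1], u[i + 1])]
    return t, u
```

Only the line inside the main loop changes if a different explicit formula from Table 6.6.1 is substituted; the store-and-recycle pattern for the recent history stays the same.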

For an implicit method, however, $b_k\neq 0$ and (6.6.2) has the form

$$u_{i+1} - h b_k f(t_{i+1},u_{i+1}) = F(u_i,u_{i-1},\ldots,u_{i-k+1}).$$

Now the unknown $u_{i+1}$ that we seek appears inside the function $f$. In general this equation is a nonlinear rootfinding problem for $u_{i+1}$ and is not solvable in a finite number of steps by a formula. The implementation of both explicit and implicit multistep formulas is discussed in detail in Implementation of multistep methods.
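For instance, a single AM1 (backward Euler) step can be written as a call to a generic rootfinder. This is only a sketch: the use of `scipy.optimize.fsolve` and of the previous value as the initial guess are arbitrary choices here, and better strategies are discussed in Implementation of multistep methods.

```python
from scipy.optimize import fsolve

def am1_step(f, t_new, u_prev, h):
    """One backward Euler (AM1) step: solve u_new - h*f(t_new, u_new) = u_prev."""
    residual = lambda u_new: u_new - h * f(t_new, u_new) - u_prev
    # The previous solution value is a natural initial guess for the rootfinder.
    return fsolve(residual, u_prev)[0]

# Example: the first step of size h = 0.2 for u' = -2tu, u(0) = 2.
u1 = am1_step(lambda t, u: -2 * t * u, 0.2, 2.0, 0.2)
```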

As with RK formulas, a multistep method is entirely specified by the values of a few constants. Table 6.6.1 and Table 6.6.2 present some of the most well-known and important formulas. The Adams–Bashforth (AB) methods are explicit, while Adams–Moulton (AM) and backward differentiation formulas (BD) are implicit. The tables also list the methods’ order of accuracy, to be defined shortly. We adopt the convention of referring to a multistep method by appending its order of accuracy to a two-letter name abbreviation, e.g., the AB3 method.

Table 6.6.1: Coefficients of Adams multistep formulas. All have $a_{k-1}=1$ and $a_{k-2} = \cdots = a_0 = 0$.

| name / order | steps $k$ | $b_k$ | $b_{k-1}$ | $b_{k-2}$ | $b_{k-3}$ | $b_{k-4}$ |
|---|---|---|---|---|---|---|
| AB1 (Euler) | 1 | 0 | 1 | | | |
| AB2 | 2 | 0 | $\frac{3}{2}$ | $-\frac{1}{2}$ | | |
| AB3 | 3 | 0 | $\frac{23}{12}$ | $-\frac{16}{12}$ | $\frac{5}{12}$ | |
| AB4 | 4 | 0 | $\frac{55}{24}$ | $-\frac{59}{24}$ | $\frac{37}{24}$ | $-\frac{9}{24}$ |
| AM1 (Backward Euler) | 1 | 1 | | | | |
| AM2 (Trapezoid) | 1 | $\frac{1}{2}$ | $\frac{1}{2}$ | | | |
| AM3 | 2 | $\frac{5}{12}$ | $\frac{8}{12}$ | $-\frac{1}{12}$ | | |
| AM4 | 3 | $\frac{9}{24}$ | $\frac{19}{24}$ | $-\frac{5}{24}$ | $\frac{1}{24}$ | |
| AM5 | 4 | $\frac{251}{720}$ | $\frac{646}{720}$ | $-\frac{264}{720}$ | $\frac{106}{720}$ | $-\frac{19}{720}$ |

Table 6.6.2: Coefficients of backward differentiation formulas. All have $b_k\neq 0$ and $b_{k-1} = \cdots = b_0 = 0$.

| name / order | steps $k$ | $a_{k-1}$ | $a_{k-2}$ | $a_{k-3}$ | $a_{k-4}$ | $b_k$ |
|---|---|---|---|---|---|---|
| BD1 (Backward Euler) | 1 | 1 | | | | 1 |
| BD2 | 2 | $\frac{4}{3}$ | $-\frac{1}{3}$ | | | $\frac{2}{3}$ |
| BD3 | 3 | $\frac{18}{11}$ | $-\frac{9}{11}$ | $\frac{2}{11}$ | | $\frac{6}{11}$ |
| BD4 | 4 | $\frac{48}{25}$ | $-\frac{36}{25}$ | $\frac{16}{25}$ | $-\frac{3}{25}$ | $\frac{12}{25}$ |

6.6.1 Generating polynomials

An alternative description of a multistep method is given by the generating polynomials

$$\begin{aligned} \rho(z) &= z^k - a_{k-1} z^{k-1} - \cdots - a_0,\\ \sigma(z) &= b_k z^k + b_{k-1}z^{k-1} + \cdots + b_0. \end{aligned}$$

For example, the AB3 method is completely specified by

$$\rho(z) = z^3-z^2, \qquad \sigma(z) = \tfrac{1}{12}(23z^2-16z+5).$$

The connection between the generating polynomials and the numerical method requires a little abstraction. Let $\mathcal{Z}$ be a forward-shift operator, so that, for example, $\mathcal{Z} t_i = t_{i+1}$, $\mathcal{Z}^3 u_{i-1} = u_{i+2}$, and so on. With this, the difference formula (6.6.2) can be written concisely as

$$\rho(\mathcal{Z})\, u_{i-k+1} = h\, \sigma(\mathcal{Z})\, f_{i-k+1}.$$
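For example, taking the AB3 polynomials above with $k=3$ gives

$$\rho(\mathcal{Z})\, u_{i-2} = \mathcal{Z}^3 u_{i-2} - \mathcal{Z}^2 u_{i-2} = u_{i+1} - u_i, \qquad h\,\sigma(\mathcal{Z})\, f_{i-2} = \tfrac{h}{12}\bigl(23 f_i - 16 f_{i-1} + 5 f_{i-2}\bigr),$$

so the operator form is just a compact way of writing the AB3 step $u_{i+1} = u_i + \tfrac{h}{12}\bigl(23 f_i - 16 f_{i-1} + 5 f_{i-2}\bigr)$.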

6.6.2 Truncation and global error

The definition of local truncation error is easily extended to multistep methods: substitute the exact solution $\hat{u}$ into the difference formula and divide the residual by $h$, giving

$$\tau_{i+1}(h) = \frac{\hat{u}(t_{i+1}) - a_{k-1}\hat{u}(t_i) - \cdots - a_0\hat{u}(t_{i-k+1})}{h} - \Bigl[ b_k f\bigl(t_{i+1},\hat{u}(t_{i+1})\bigr) + \cdots + b_0 f\bigl(t_{i-k+1},\hat{u}(t_{i-k+1})\bigr) \Bigr].$$

Although we will not present the analysis, the main conclusion for the multistep methods in this section is the same as for one-step methods: if the local truncation error is $O(h^p)$, then the global error is also $O(h^p)$, and we call $p$ the order of accuracy of the method.
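The brief experiment below is a sketch of how that conclusion can be checked numerically (it is not code from the text): a hand-rolled AB2 implementation, started with one improved-Euler step, is applied to the IVP $u'=-2tu$, $u(0)=2$ from the exercises, whose exact solution is $2e^{-t^2}$. Halving $h$ should reduce the final-time error by roughly a factor of 4, consistent with second-order accuracy.

```python
import numpy as np

def ab2(f, tspan, u0, n):
    """Sketch of the 2-step Adams-Bashforth (AB2) method for a scalar IVP."""
    a, b = tspan
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    u = np.zeros(n + 1)
    u[0] = u0
    # Single extra starting value from an improved Euler (second-order RK) step.
    s1 = f(t[0], u[0])
    u[1] = u[0] + h / 2 * (s1 + f(t[1], u[0] + h * s1))
    f_old, f_new = s1, f(t[1], u[1])
    for i in range(1, n):
        u[i + 1] = u[i] + h / 2 * (3 * f_new - f_old)
        f_old, f_new = f_new, f(t[i + 1], u[i + 1])
    return t, u

f = lambda t, u: -2 * t * u          # exact solution: u(t) = 2*exp(-t**2)
for n in [50, 100, 200, 400]:
    t, u = ab2(f, (0.0, 2.0), 2.0, n)
    err = abs(u[-1] - 2 * np.exp(-t[-1] ** 2))
    print(f"n = {n:4d}, error at t = 2: {err:.2e}")
```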

6.6.3 Derivation of the formulas

Where do coefficients like those in Table 6.6.1 come from? There are different ways to answer that question, but Adams and BD methods have distinctive stories to tell. The derivation of Adams methods begins with the observation that

$$\hat{u}(t_{i+1}) = \hat{u}(t_i) + \int_{t_i}^{t_{i+1}} \hat{u}'(t) \, dt = \hat{u}(t_i) + \int_{t_i}^{t_{i+1}} f\bigl(t,\hat{u}(t)\bigr) \, dt.$$

As a result, a $k$-step Adams method always has $\rho(z)=z^k-z^{k-1}$. While the integrand above is unknown over the interval of integration, we can approximate it by a polynomial interpolant of the historical values of $f$. That polynomial can be integrated analytically, leading to a derivation of the coefficients $b_0,\ldots,b_k$.

In AB methods, the interpolating polynomial has degree $k-1$, which means that its interpolation error is $O(h^k)$. Upon integrating we get a local error of $O(h^{k+1})$, which reduces to a global error of $O(h^k)$. The AM interpolating polynomial is one degree larger, so its order of accuracy is one higher for the same number of steps.
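As a sanity check on this derivation (a computer algebra sketch, not a procedure prescribed by the text), one can interpolate symbolic values of $f$ at the past nodes $t_i$, $t_{i-1}$, $t_{i-2}$, integrate the interpolant over one step, and recover the AB3 row of Table 6.6.1.

```python
import sympy as sym

h, s = sym.symbols("h s", positive=True)          # s measures time relative to t_i
fi, fim1, fim2 = sym.symbols("f_i f_im1 f_im2")   # symbolic values f_i, f_{i-1}, f_{i-2}

# Quadratic interpolant of f through (t_i, f_i), (t_{i-1}, f_{i-1}), (t_{i-2}, f_{i-2}).
p = sym.interpolate([(0, fi), (-h, fim1), (-2 * h, fim2)], s)

# Integrate over one step [t_i, t_{i+1}] and divide by h to expose the b-coefficients.
increment = sym.integrate(p, (s, 0, h))
print(sym.expand(increment / h))   # prints 23*f_i/12 - 4*f_im1/3 + 5*f_im2/12
```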

The idea behind backward differentiation formulas is complementary to that for Adams: Interpolate solution values $u_{i+1},\ldots,u_{i-k+1}$ by a polynomial $q$, and then, motivated by $f(t,\hat{u})=\hat{u}'(t)$, set

$$f_{i+1} = q'(t_{i+1}).$$

The quantity $q'(t_{i+1})$ can be approximated by a finite difference of the past solution values, leading to the coefficients of $\rho(z)$ and $\sigma(z)=b_k z^k$.
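A similar computer algebra sketch (again only an illustration) recovers the BD2 row of Table 6.6.2: interpolate the solution values at $t_{i-1}$, $t_i$, $t_{i+1}$, differentiate at $t_{i+1}$, and equate the result to $f_{i+1}$.

```python
import sympy as sym

h, s = sym.symbols("h s", positive=True)                    # s measures time relative to t_{i-1}
uim1, ui, uip1, fip1 = sym.symbols("u_im1 u_i u_ip1 f_ip1")

# Quadratic interpolant q of the solution values at t_{i-1}, t_i, t_{i+1}.
q = sym.interpolate([(0, uim1), (h, ui), (2 * h, uip1)], s)

# Impose q'(t_{i+1}) = f_{i+1} and solve for u_{i+1}.
dq = sym.diff(q, s).subs(s, 2 * h)
bd2 = sym.solve(sym.Eq(dq, fip1), uip1)[0]
print(sym.expand(bd2))   # prints 2*h*f_ip1/3 + 4*u_i/3 - u_im1/3
```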

6.6.4 Exercises

  1. ✍ For each method, write out the generating polynomials $\rho(z)$ and $\sigma(z)$.

    (a) AM2, (b) AB2, (c) BD2, (d) AM3, (e) AB3.

  2. ✍ Write out by hand an equation that defines the first solution value $u_1$ produced by AM1 (backward Euler) for each IVP. (Reminder: This is an implicit formula.)

    (a) $u' = -2t u, \quad 0 \le t \le 2, \quad u_0 = 2, \quad h = 0.2$

    (b) $u' = u + t, \quad 0 \le t \le 1, \quad u_0 = 2, \quad h = 0.1$

    (c) $(1+x^3)\,u u' = x^2, \quad 0 \le x \le 3, \quad u_0 = 1, \quad h = 0.5$

  3. ✍ Do the preceding exercise for AM2 (trapezoid) instead of backward Euler.

  4. ✍ For each method, find the leading term in the local truncation error using (6.6.8).

    (a) AM2, (b) AB2, (c) BD2.

  5. ✍/ ⌨ For each method, find the leading term in the local truncation error using (6.6.8). (Computer algebra is recommended.)

    (a) AM3, (b) AB3, (c) BD4.

  6. ✍ A formula for the quadratic polynomial interpolant through the points $(s_1,y_1)$, $(s_2,y_2)$, and $(s_3,y_3)$ is

    $$p(x) = \frac{(x-s_2)(x-s_3)}{(s_1-s_2)(s_1-s_3)}\,y_1 + \frac{(x-s_1)(x-s_3)}{(s_2-s_1)(s_2-s_3)}\,y_2 + \frac{(x-s_1)(x-s_2)}{(s_3-s_1)(s_3-s_2)}\,y_3.$$

    (a) Use (6.6.13) and a polynomial interpolant through three points to derive the coefficients of the AM3 method.

    (b) Use (6.6.16) and a polynomial interpolant through three points to derive the coefficients of the BD2 method.

  7. ✍ By doing a series expansion about the point $z=1$, show for BD2 that

    $$\frac{\rho(z)}{\sigma(z)} - \log(z) = O\bigl( (z-1)^3 \bigr).$$
  8. ✍/ ⌨ By doing a series expansion about the point $z=1$, show for AB3 and AM3 that

    $$\frac{\rho(z)}{\sigma(z)} - \log(z) = O\bigl( (z-1)^4 \bigr).$$

    (Computer algebra is recommended.)

Footnotes
  1. If we must use an RK method to start anyway, why bother with multistep formulas at all? The answer is that multistep methods can be more efficient in some problems, even at the same order of accuracy.