We now consider some of the practical issues that arise when multistep formulas are used to solve IVPs. In this section we emphasize the vector IVP, $\mathbf{u}'=\mathbf{f}(t,\mathbf{u})$, and use boldface in the difference formula (6.6.2) as well.
6.7.1 Explicit methods ¶ As a concrete example, the AB4 method is defined by the formula

$$\mathbf{u}_{i+1} = \mathbf{u}_i + h\left( \tfrac{55}{24}\mathbf{f}_i - \tfrac{59}{24}\mathbf{f}_{i-1} + \tfrac{37}{24}\mathbf{f}_{i-2} - \tfrac{9}{24}\mathbf{f}_{i-3}\right), \quad i=3,\ldots,n-1.$$

Function 6.7.1 shows a basic implementation of AB4.
Observe that Function 6.4.2 (RK4) is used to find the starting values $\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3$ that are needed before the iteration formula takes over; it is applied over the first three steps, using the same step size as in the AB4 iteration. These results are then used to find $\mathbf{f}_0,\ldots,\mathbf{f}_3$ and get the main iteration started.
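A quick way to sanity-check the coefficients in the AB4 formula is to verify the exactness that fourth-order accuracy requires: the weights come from integrating a cubic interpolant of $f$, so with $h=1$ and $t_i=0$ they must reproduce $\int_0^1 t^m\,dt$ at the nodes $0,-1,-2,-3$ for every $m\le 3$. A short standalone check (not part of Function 6.7.1):

```python
import numpy as np

# Check that the AB4 weights integrate polynomials of degree <= 3 exactly:
# with h = 1 and t_i = 0 we need  sum_j sigma_j * (-j)^m = integral_0^1 t^m dt.
sigma = np.array([55, -59, 37, -9]) / 24
nodes = -np.arange(4, dtype=float)      # interpolation nodes t_i - j*h

for m in range(5):
    lhs = sigma @ nodes**m
    rhs = 1 / (m + 1)
    print(f"degree {m}: exact = {np.isclose(lhs, rhs)}")
```

The check passes for degrees 0 through 3 and fails at degree 4, consistent with AB4 being a fourth-order method and no better.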
Function 6.7.1: 4th-order Adams–Bashforth formula for an IVP
function [t, u] = ab4(ivp, a, b, n)
%AB4    4th-order Adams-Bashforth formula for an IVP.
% Input:
%   ivp    structure defining the IVP
%   a, b   endpoints of time interval (scalars)
%   n      number of time steps (integer)
% Output:
%   t      selected nodes (vector, length n+1)
%   u      solution values (array, m by (n+1))

du_dt = ivp.ODEFcn;
u0 = ivp.InitialValue;
p = ivp.Parameters;

% Define time discretization.
h = (b - a) / n;
t = a + h * (0:n)';

% Constants in the AB4 method.
k = 4;
sigma = [55; -59; 37; -9] / 24;

% Find starting values by RK4.
[ts, us] = rk4(ivp, a, a + (k-1)*h, k-1);
u = zeros(length(u0), n+1);
u(:, 1:k) = us(:, 1:k);

% Compute history of u' values, from oldest to newest.
f = zeros(length(u0), k);
for i = 1:k-1
    f(:, k-i) = du_dt(t(i), u(:, i), p);
end

% Time stepping.
for i = k:n
    f = [du_dt(t(i), u(:, i), p), f(:, 1:k-1)];   % new value of du/dt
    u(:, i+1) = u(:, i) + h * (f * sigma);        % advance one step
end
About the code
Line 21 sets sigma
to be the coefficients of the generating polynomial σ ( z ) \sigma(z) σ ( z ) of AB4. Lines 24--26 set up the IVP over the time interval a ≤ t ≤ a + 3 h a \le t \le a+3 h a ≤ t ≤ a + 3 h , call rk4
to solve it using the step size h h h , and use the result to fill the first four values of the solution. Then lines 29--32 compute the vector [ f 2 , f 1 , f 0 ] [f_2,f_1,f_0] [ f 2 , f 1 , f 0 ] .
Line 36 computes f i f_i f i , based on the most recent solution value and time. That goes into the first column of f
, followed by the three values that were previously most recent. These are the four values that appear in (6.7.1) . Each particular f i f_i f i value starts at the front of f
, moves through each position in the vector over three iterations, and then is forgotten.
Function 6.7.1: 4th-order Adams–Bashforth formula for an IVP
def ab4(dudt, tspan, u0, n):
    """
    ab4(dudt, tspan, u0, n)

    Apply the Adams-Bashforth 4th-order method to solve the vector-valued IVP
    u' = `dudt`(t, u) over the interval `tspan` with u(`tspan[0]`) = `u0`, using `n` steps.
    """
    # Time discretization.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n + 1)

    # Constants in the AB4 method.
    k = 4
    sigma = np.array([55, -59, 37, -9]) / 24

    # Find starting values by RK4.
    ts, us = rk4(dudt, [a, a + (k - 1) * h], u0, k - 1)
    u = np.tile(np.array(u0), (n + 1, 1))
    u[:k] = us[:k].T

    # Compute history of u' values, from newest to oldest.
    f = np.array([dudt(t[k-j-2], u[k-j-2]) for j in range(k)])
    # (The last row is a placeholder; it is dropped at the first step below.)

    # Time stepping.
    for i in range(k-1, n):
        f = np.vstack([dudt(t[i], u[i]), f[:-1]])   # new value of du/dt
        u[i+1] = u[i] + h * np.dot(sigma, f)        # advance one step
    return t, u.T
About the code
Line 15 sets `sigma` to be the coefficients of the generating polynomial $\sigma(z)$ of AB4. Lines 19--21 set up the IVP over the time interval $a \le t \le a+3h$, call `rk4` to solve it using the step size $h$, and use the result to fill the first four values of the solution. Then line 24 computes the vector $[f_2,f_1,f_0]$.

Line 28 computes $\mathbf{f}_i$, based on the most recent solution value and time. That goes into the first row of `f`, followed by the three values that were previously most recent. These are the four values that appear in (6.7.1). Each particular $\mathbf{f}_i$ value starts at the front of `f`, moves through each position over three iterations, and then is forgotten.
Convergence of Adams–Bashforth
We study the convergence of AB4 using the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$. As usual, `solve` is called to give an accurate reference solution.
ivp = ODEProblem((u, p, t) -> sin((t + u)^2), -1.0, (0.0, 4.0))
u_ref = solve(ivp, Tsit5(), reltol=1e-14, abstol=1e-14);
Now we perform a convergence study of the AB4 code.
n = [round(Int, 4 * 10^k) for k in 0:0.5:3]
err = []
for n in n
t, u = FNC.ab4(ivp, n)
push!(err, norm(u_ref.(t) - u, Inf))
end
pretty_table([n err], header=["n", "inf-norm error"])
The method should converge as O ( h 4 ) O(h^4) O ( h 4 ) , so a log-log scale is appropriate for the errors.
plot(n, err, m=3, label="AB4",
xaxis=(:log10, L"n"), yaxis=(:log10, "inf-norm error"),
title="Convergence of AB4", leg=:bottomleft)
plot!(n, (n / n[1]) .^ (-4), l=:dash, label=L"O(n^{-4})")
Convergence of Adams–Bashforth
We study the convergence of AB4 using the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$. As usual, a built-in solver is called to give an accurate reference solution.
ivp = ode;
ivp.ODEFcn = @(t, u, p) sin((t + u)^2);
ivp.InitialValue = -1;
a = 0; b = 4;
ivp.AbsoluteTolerance = 1e-13;
ivp.RelativeTolerance = 1e-13;
u_ref = solutionFcn(ivp, a, b);
Now we perform a convergence study of the AB4 code.
n = round(4 * 10.^(0:0.5:3)');
err = [];
for i = 1:length(n)
[t, u] = ab4(ivp, a, b, n(i));
err(i) = norm(u_ref(t) - u, Inf);
end
disp(table(n, err, variableNames=["n", "inf-norm error"]))
The method should converge as O ( h 4 ) O(h^4) O ( h 4 ) , so a log-log scale is appropriate for the errors.
clf, loglog(n, err, '-o')
hold on
loglog(n, 0.5 * err(end) * (n / n(end)) .^ (-4), '--')
xlabel("n"); ylabel("inf-norm error")
title("Convergence of AB4")
legend("AB4", "O(n^{-4})", "location", "southwest")
Convergence of Adams–Bashforth
We study the convergence of AB4 using the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$. As usual, `solve_ivp` is called to give an accurate reference solution.
from scipy.integrate import solve_ivp
du_dt = lambda t, u: sin((t + u)**2)
tspan = (0.0, 4.0)
u0 = [-1.0]
u_ref = solve_ivp(du_dt, tspan, u0, dense_output=True, rtol=1e-13, atol=1e-13).sol
Now we perform a convergence study of the AB4 code.
n = array([int(4 * 10**k) for k in linspace(0, 3, 7)])
err = []
results = PrettyTable(["n", "AB4 error"])
for i in range(len(n)):
t, u = FNC.ab4(du_dt, tspan, u0, n[i])
err.append( abs(u_ref(4)[0] - u[0][-1]) )
results.add_row([n[i], err[-1]])
print(results)
The method should converge as O ( h 4 ) O(h^4) O ( h 4 ) , so a log-log scale is appropriate for the errors.
loglog(n, err, "-o", label="AB4")
loglog(n, 0.5 * err[-1] * (n / n[-1])**(-4), "--", label="4th order")
xlabel("$n$"), ylabel("final error")
legend(), title("Convergence of AB4");
6.7.2 Implicit methods ¶ The implementation of an implicit multistep method is more difficult. Consider the second-order implicit formula AM2, also known as the trapezoid method. To advance from step $i$ to $i+1$, we need to solve

$$\mathbf{z} - \tfrac{1}{2} h\, \mathbf{f}(t_{i+1},\mathbf{z}) = \mathbf{u}_i + \tfrac{1}{2} h\, \mathbf{f}(t_i,\mathbf{u}_i)$$

for $\mathbf{z}$. This equation can be written as $\mathbf{g}(\mathbf{z})=\boldsymbol{0}$, so the rootfinding methods of Chapter 4 can be used. The new value $\mathbf{u}_{i+1}$ is equal to the root of this equation.
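For a scalar problem, the rootfinding step is easy to sketch with an ordinary Newton iteration. This is only an illustration (the test problem, step data, and use of Newton here are made up for the example; the implementations below delegate the rootfinding to a quasi-Newton solver instead):

```python
# One trapezoid (AM2) step for the scalar test problem u' = -u, with the
# root of g(z) = z - (h/2) f(t_{i+1}, z) - known found by Newton's method.
f = lambda t, u: -u            # right-hand side of the ODE
df_du = lambda t, u: -1.0      # its derivative with respect to u

h, t_i, u_i = 0.1, 0.0, 1.0
known = u_i + h / 2 * f(t_i, u_i)      # data independent of the new value

z = u_i                                # initial guess for u_{i+1}
for _ in range(10):
    g = z - h / 2 * f(t_i + h, z) - known
    dg = 1.0 - h / 2 * df_du(t_i + h, z)
    z -= g / dg

# For this linear problem the step has the closed form u_i*(1 - h/2)/(1 + h/2),
# so the Newton result can be checked directly.
print(z)
```

Because this test problem is linear in $z$, Newton converges in a single iteration; for a genuinely nonlinear $\mathbf{f}$, several iterations per time step are typical.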
An implementation of AM2 using Function 4.6.3 from Quasi-Newton methods is shown in Function 6.7.2 .
Function 6.7.2: 2nd-order Adams–Moulton (trapezoid) formula for an IVP
function [t, u] = am2(ivp, a, b, n)
% AM2    2nd-order Adams-Moulton (trapezoid) formula for an IVP.
% Input:
%   ivp    structure defining the IVP
%   a, b   endpoints of time interval (scalars)
%   n      number of time steps (integer)
% Output:
%   t      selected nodes (vector, length n+1)
%   u      solution values (array, m by (n+1))

du_dt = ivp.ODEFcn;
u0 = ivp.InitialValue;
p = ivp.Parameters;

% Define time discretization.
h = (b - a) / n;
t = a + h * (0:n)';
u = zeros(length(u0), n+1);
u(:, 1) = u0(:);

% Time stepping.
for i = 1:n
    % Data that does not depend on the new value.
    known = u(:,i) + h/2 * du_dt(t(i), u(:, i), p);

    % Find a root for the new value.
    unew = levenberg(@trapzero, known);
    u(:, i+1) = unew(:, end);
end

% This function defines the rootfinding problem at each step.
function F = trapzero(z)
    F = z - h/2 * du_dt(t(i+1), z, p) - known;
end

end  % main function
About the code
Lines 32--34 define the function $\mathbf{g}$. This is sent to `levenberg` in line 27 to find the new solution value, using an Euler half-step as its starting value. A robust code would have to intercept the case where `levenberg` fails to converge, but we have ignored this issue for the sake of brevity.
Function 6.7.2: 2nd-order Adams–Moulton (trapezoid) formula for an IVP
def am2(dudt, tspan, u0, n):
    """
    am2(dudt, tspan, u0, n)

    Apply the Adams-Moulton 2nd-order (trapezoid) method to solve the vector-valued
    IVP u' = `dudt`(t, u) over the interval `tspan` with u(`tspan[0]`) = `u0`, using `n` steps.
    """
    # Time discretization.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n + 1)

    # Initialize output.
    u = np.tile(np.array(u0), (n + 1, 1))

    # Time stepping.
    for i in range(n):
        # Data that does not depend on the new value.
        known = u[i] + h / 2 * dudt(t[i], u[i])
        # Find a root for the new value.
        F = lambda z: z - h / 2 * dudt(t[i+1], z) - known
        unew = levenberg(F, known)
        u[i+1] = unew[:, -1]
    return t, u.T
About the code
Lines 22-23 define the function $\mathbf{g}$ and call `levenberg` to find the new solution value, using an Euler half-step as its starting value. A robust code would have to intercept the case where `levenberg` fails to converge, but we have ignored this issue for the sake of brevity.
6.7.3 Stiff problems ¶ At each time step in Function 6.7.2, or any implicit IVP solver, a rootfinding iteration of uncertain expense is needed, requiring multiple calls to evaluate the function $\mathbf{f}$. This fact makes the cost of an implicit method much greater on a per-step basis than for an explicit one. Given this drawback, you are justified to wonder whether implicit methods are ever competitive! The answer is emphatically yes, as Demo 6.7.2 demonstrates.
Stiffness
The following simple ODE uncovers a surprise.
ivp = ODEProblem((u, p, t) -> u^2 - u^3, 0.005, (0, 400.0))
We will solve the problem first with the implicit AM2 method using $n=200$ steps.
tI, uI = FNC.am2(ivp, 200)
plot(tI, uI, label="AM2",
xlabel=L"t", ylabel=L"u(t)", leg=:bottomright)
Now we repeat the process using the explicit AB4 method.
tE, uE = FNC.ab4(ivp, 200)
scatter!(tE, uE, m=3, label="AB4", ylim=[-4, 2])
Once the solution starts to take off, the AB4 result goes catastrophically wrong.
We hope that AB4 will converge in the limit $h\to 0$, so let's try using more steps.
plt = scatter(tI, uI, label="AM2, n=200", m=3,
xlabel=L"t", ylabel=L"u(t)", leg=:bottomright)
for n in [1000, 1600]
tE, uE = FNC.ab4(ivp, n)
plot!(tE, uE, label="AB4, n=$n")
end
plt
So AB4, which is supposed to be more accurate than AM2, actually needs something like 8 times as many steps to get a reasonable-looking answer!
Stiffness
The following simple ODE uncovers a surprise.
ivp = ode;
ivp.ODEFcn = @(t, u, p) u^2 - u^3;
ivp.InitialValue = 0.005;
We will solve the problem first with the implicit AM2 method using $n=200$ steps.
[tI, uI] = am2(ivp, 0, 400, 200);
clf
plot(tI, uI)
xlabel("t"); ylabel("u(t)")
Now we repeat the process using the explicit AB4 method.
[tE, uE] = ab4(ivp, 0, 400, 200);
hold on
plot(tE, uE, '.', 'markersize', 8)
ylim([-5, 3])
legend("AM2", "AB4")
Once the solution starts to take off, the AB4 result goes catastrophically wrong.
format short e
uE(105:111)
We hope that AB4 will converge in the limit $h\to 0$, so let's try using more steps.
clf, plot(tI, uI, '.', 'markersize', 10)
hold on
[tE, uE] = ab4(ivp, 0, 400, 1000);
plot(tE, uE)
[tE, uE] = ab4(ivp, 0, 400, 1600);
plot(tE, uE)
legend("AM2, n=200", "AB4, n=1000", "AB4, n=1600")
So AB4, which is supposed to be more accurate than AM2, actually needs something like 8 times as many steps to get a reasonable-looking answer!
Stiffness
The following simple ODE uncovers a surprise.
f = lambda t, u: u**2 - u**3
u0 = array([0.005])
tspan = [0, 400]
We will solve the problem first with the implicit AM2 method using $n=200$ steps.
tI, uI = FNC.am2(f, [0.0, 400.0], u0, 200)
fig, ax = subplots()
ax.plot(tI, uI[0], label="AM2")
xlabel("$t$"), ylabel("$u(t)$");
So far, so good. Now we repeat the process using the explicit AB4 method.
tE, uE = FNC.ab4(f, [0.0, 400.0], u0, 200)
ax.scatter(tE, uE[0], label="AB4")
ax.set_ylim([-4, 2]), ax.legend()
fig
Once the solution starts to take off, the AB4 result goes catastrophically wrong.
We hope that AB4 will converge in the limit $h\to 0$, so let's try using more steps.
plot(tI, uI[0], color="k", label="AM2")
tE, uE = FNC.ab4(f, [0, 400], u0, 1000)
plot(tE, uE[0], ".-", label="AB4, n=1000")
tE, uE = FNC.ab4(f, [0, 400], u0, 1600)
plot(tE, uE[0], ".-", label="AB4, n=1600")
xlabel("$t$"), ylabel("$u(t)$")
legend()
So AB4, which is supposed to be more accurate than AM2, actually needs something like 8 times as many steps to get a reasonable-looking answer!
Although the result of Demo 6.7.2 may seem counter-intuitive, there is no contradiction. A fourth-order explicit formula is more accurate than a second-order implicit one, in the limit $h\to 0$. But there is another limit to consider, $t\to\infty$ with $h$ fixed, and in that one the implicit method wins.
Problems for which implicit methods are much more efficient than their explicit counterparts are called stiff. A complete mathematical description will wait for Chapter 11, but a sure sign of stiffness is the presence of phenomena on widely different time scales. In Demo 6.7.2, for instance, there are two slow periods during which the solution changes very little, interrupted by a very fast transition in the state. An explicit method "thinks" that the step size must always be dictated by the time scale of the fast transition, whereas an implicit method can take large steps during the slow periods.
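The advantage can be seen on a small linear model (a made-up example, not the ODE from the demo): with $u' = \lambda(u-\cos t) - \sin t$ and $\lambda=-100$, the exact solution $\cos t$ evolves slowly, but forward (explicit) Euler is stable only for $h \le 2/|\lambda|$, while backward (implicit) Euler tolerates a much larger step.

```python
import numpy as np

# A linear model of stiffness:
#   u' = lam*(u - cos(t)) - sin(t),  u(0) = 1,  exact solution u(t) = cos(t).
# The solution changes slowly, but lam sets a fast hidden time scale.
lam = -100.0
h, n = 0.03, 200       # h exceeds the forward Euler stability limit 2/|lam| = 0.02
t = h * np.arange(n + 1)

ue = np.ones(n + 1)    # forward (explicit) Euler
ui = np.ones(n + 1)    # backward (implicit) Euler
for i in range(n):
    ue[i+1] = ue[i] + h * (lam * (ue[i] - np.cos(t[i])) - np.sin(t[i]))
    # The ODE is linear in u, so the backward Euler equation solves in closed form:
    ui[i+1] = (ui[i] - h * (lam * np.cos(t[i+1]) + np.sin(t[i+1]))) / (1 - h * lam)

print(abs(ue[-1]))                    # explicit: grows explosively
print(abs(ui[-1] - np.cos(t[-1])))    # implicit: small error despite the large step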
6.7.4 Adaptivity ¶ As with RK methods, we can run two time-stepping methods simultaneously in order to estimate the error and adjust the step size. For example, we could pair AB3 with AB4 at practically no cost, because the methods differ only in how they include known information from the recent past. The more accurate AB4 value should allow an accurate estimate of the local error in the AB3 value, just as with embedded RK pairs.
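To make the pairing concrete, here is a sketch (illustrative code, not from a library) of a single AB4 step together with the AB3-based error estimate, both built from the same stored history of $f$ values:

```python
import numpy as np

# Both formulas are linear combinations of the same stored f history, so the
# error estimate costs essentially nothing beyond the step itself.
sigma4 = np.array([55, -59, 37, -9]) / 24    # AB4 weights
sigma3 = np.array([23, -16, 5]) / 12         # AB3 weights

def ab_pair_step(u, fhist, h):
    """fhist holds [f_i, f_{i-1}, f_{i-2}, f_{i-3}], newest first."""
    u4 = u + h * (sigma4 @ fhist)        # AB4 prediction
    u3 = u + h * (sigma3 @ fhist[:3])    # AB3 prediction from the same history
    return u4, abs(u4 - u3)              # value, plus local-error estimate for AB3

# Try one step of u' = u (exact solution e^t), seeding the history exactly.
h = 0.05
t_hist = np.array([0.15, 0.10, 0.05, 0.0])
u4, errest = ab_pair_step(np.exp(0.15), np.exp(t_hist), h)
print(errest, abs(np.exp(0.2) - u4))     # the AB4 error is well below the estimate
```

The estimate tracks the AB3 local error, which dominates the (much smaller) AB4 local error, so it is a safe basis for choosing the next step size.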
Because multistep methods rely on the solution history, though, changing the step size is more algebraically complicated than for RK methods. If $h$ is changed, then the historical values $\mathbf{u}_{i-1},\mathbf{u}_{i-2},\ldots$ and $\mathbf{f}_{i-1},\mathbf{f}_{i-2},\ldots$ are no longer given at the right moments in time to apply the iteration formula. A typical remedy is to use interpolation to re-evaluate the historical values at the appropriate times. The details are important but not especially illuminating, and we do not give them here.
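As a sketch of that remedy (with made-up sample times for illustration), a cubic polynomial through the four stored $(t,f)$ samples can supply history values at the times the new step size requires:

```python
import numpy as np

# When h changes from h_old to h_new, re-evaluate the stored f history at the
# newly needed times by interpolating the old samples. Here sin stands in for
# the stored values of f along the solution.
h_old, h_new, t_i = 0.1, 0.04, 1.0
t_hist = t_i - h_old * np.arange(4)       # old sample times, newest first
f_hist = np.sin(t_hist)                   # stored history (stand-in values)

coeffs = np.polyfit(t_hist, f_hist, 3)    # cubic through four points: exact fit
t_new = t_i - h_new * np.arange(4)        # times needed after the step-size change
f_new = np.polyval(coeffs, t_new)

print(np.max(np.abs(f_new - np.sin(t_new))))   # interpolation error, small for smooth f
```

Since the interpolant has the same degree as the one underlying AB4 itself, this substitution does not degrade the formal order of accuracy.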
6.7.5 Exercises ¶ ⌨ For each IVP, solve the problem using Function 6.7.1 with $n=100$, and plot the solution and the error $u-\hat{u}$ on separate plots.
(a) $u' = -2tu$, $\ 0 \le t \le 2$, $\ u(0) = 2$; $\ \hat{u}(t) = 2e^{-t^2}$

(b) $u' = u + t$, $\ 0 \le t \le 1$, $\ u(0) = 2$; $\ \hat{u}(t) = 1-t+e^t$

(c) $u' = x^2/[u(1+x^3)]$, $\ 0 \le x \le 3$, $\ u(0) = 1$; $\ \hat{u}(x) = [1+(2/3)\ln(1+x^3)]^{1/2}$

(d) $u'' + 9u = 9t$, $\ 0 < t < 2\pi$, $\ u(0) = 1$, $\ u'(0) = 1$; $\ \hat{u}(t) = t + \cos(3t)$

(e) $u'' + 9u = \sin(2t)$, $\ 0 < t < 2\pi$, $\ u(0) = 2$, $\ u'(0) = 1$; $\ \hat{u}(t) = (1/5)\sin(3t) + 2\cos(3t) + (1/5)\sin(2t)$

(f) $u'' - 9u = 9t$, $\ 0 < t < 1$, $\ u(0) = 2$, $\ u'(0) = -1$; $\ \hat{u}(t) = e^{3t} + e^{-3t} - t$

(g) $u'' + 4u' + 4u = t$, $\ 0 < t < 4$, $\ u(0) = 1$, $\ u'(0) = 3/4$; $\ \hat{u}(t) = (3t + 5/4)e^{-2t} + (t-1)/4$

(h) $x^2u'' + 5xu' + 4u = 0$, $\ 1 < x < e^2$, $\ u(1) = 1$, $\ u'(1) = -1$; $\ \hat{u}(x) = x^{-2}(1 + \ln x)$

(i) $2x^2u'' + 3xu' - u = 0$, $\ 1 < x < 16$, $\ u(1) = 4$, $\ u'(1) = -1$; $\ \hat{u}(x) = 2(x^{1/2} + x^{-1})$

(j) $x^2u'' - xu' + 2u = 0$, $\ 1 < x < e^{\pi}$, $\ u(1) = 3$, $\ u'(1) = 4$; $\ \hat{u}(x) = x\left[3\cos(\ln x) + \sin(\ln x)\right]$
⌨ For each IVP in Exercise 1, use Function 6.7.1 for $n=10\cdot 2^d$ and $d=1,\ldots,10$. Make a log-log convergence plot for the final time error $|u_n-\hat{u}(t_n)|$ versus $n$, and add a straight line indicating fourth-order convergence.
⌨ Repeat Exercise 1 above using Function 6.7.2 .
⌨ Repeat Exercise 2 above using Function 6.7.2 and comparing to second-order rather than fourth-order convergence.
⌨ Using Function 6.7.2 as a model, write a function bd2
that applies the BD2 method to solve an IVP. Test the convergence of your function on one of the IVPs in Exercise 1 above.
⌨ For double-precision purposes, the exact solution of the IVP in Demo 6.7.2 satisfies $\hat{u}(400)=1$.
(a) Use Function 6.7.1 with $n=600,800,1000,\ldots,2000$ and make a log-log convergence plot of the error $|u_n-1|$ as a function of $n$.
(b) Repeat part (a) using Function 6.7.2 .
Consider the IVP

$$\mathbf{u}'(t) = \mathbf{A}\mathbf{u}(t), \quad \mathbf{A} =
\begin{bmatrix}
0 & -4 \\ 4 & 0
\end{bmatrix}, \quad \mathbf{u}(0) =
\begin{bmatrix}
1 \\ 0
\end{bmatrix}.$$

(a) ✍ Define $E(t) = \bigl\|\mathbf{u}(t)\bigr\|_2^2$. Show that $E(t)$ is constant. (Hint: differentiate $\mathbf{u}^T\mathbf{u}$ with respect to time and simplify it.)
(b) ⌨ Use Function 6.7.1 to solve the IVP for $t\in[0,20]$ with $n=100$ and $n=150$. Plot $|E(t)-E(0)|$ versus time for both solutions on a single log-linear graph. You should see exponential growth in time. (In this regime, AB4 is acting unstably in a sense discussed in section-diffusion-absstab.)
(c) ⌨ Repeat part (b) with $n=400$ and $n=600$, but on a linear-linear plot. Now you should see only linear growth of $|E(t)-E(0)|$. (In this regime, AB4 is fully stable.)
(d) ⌨ Repeat part (b) with AM2 instead of AB4, on a linear-linear plot. You will find that AM2 conserves energy, just like the exact solution.
⌨ (a) Modify Function 6.7.1 to implement the AB2 method.
(b) Repeat part (b) of the preceding exercise, using AB2 in place of AB4.
(c) Repeat part (c) of the preceding exercise, using AB2 in place of AB4.
⌨ (a) Modify Function 6.7.2 to implement the backward Euler (AM1) method.
(b) Repeat part (d) of Exercise 7 above, using AM1 in place of AM2 and $n=400, 800$. Does the AM1 method conserve energy?