We come now to one of the major and most-used types of methods for initial-value problems: Runge–Kutta (RK) methods.[1] They are one-step methods in the sense of (6.2.7), though they are not often written in that form. RK methods boost the accuracy past first order by evaluating the ODE function $f(t,u)$ more than once per time step.
6.4.1 A second-order method

Consider a series expansion of the exact solution to $u'=f(t,u)$,
$$
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h \hat{u}'(t_i) + \frac{1}{2}h^2 \hat{u}''(t_i) + O(h^3). \tag{6.4.1}
$$

If we replace $\hat{u}'$ by $f$ and keep only the first two terms on the right-hand side, we would obtain the Euler method. To get more accuracy we will need to compute or estimate the third term as well. Note that
$$
\hat{u}'' = f' = \frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial u}\,\frac{du}{dt} = f_t + f_u f, \tag{6.4.2}
$$

where we have applied the multidimensional chain rule to the derivative, because both of the arguments of $f$ depend on $t$.
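As a quick sanity check (our addition, not part of the text), the chain-rule identity can be verified symbolically with SymPy for a sample choice of $f$:

import sympy as sp

# Check that d/dt f(t, u(t)) = f_t + f_u * f when u'(t) = f(t, u(t)),
# using the sample function f(t, u) = sin((t + u)^2).
t = sp.symbols('t')
u = sp.Function('u')
f_expr = sp.sin((t + u(t))**2)

# Left side: total t-derivative, with u'(t) replaced by f itself.
lhs = sp.diff(f_expr, t).subs(sp.Derivative(u(t), t), f_expr)

# Right side: f_t + f_u * f, built from partials in independent variables x, y.
x, y = sp.symbols('x y')
g = sp.sin((x + y)**2)
rhs = (sp.diff(g, x) + sp.diff(g, y) * g).subs({x: t, y: u(t)})

print(sp.simplify(lhs - rhs))    # prints 0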
Using this expression in (6.4.1), we obtain

$$
\begin{split}
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h\biggl[ f\bigl(t_i,\hat{u}(t_i)\bigr) &+ \frac{h}{2}f_t\bigl(t_i,\hat{u}(t_i)\bigr) \\
&+ \frac{h}{2}f\bigl(t_i,\hat{u}(t_i)\bigr)\,f_u\bigl(t_i,\hat{u}(t_i)\bigr) \biggr] + O(h^3). \tag{6.4.3}
\end{split}
$$

We have no desire to calculate and then code those partial derivatives of $f$ directly; an approximation is called for. Observe that
$$
f\bigl(t_i+\alpha,\hat{u}(t_i)+\beta\bigr) = f\bigl(t_i,\hat{u}(t_i)\bigr) + \alpha\, f_t\bigl(t_i,\hat{u}(t_i)\bigr) + \beta\, f_u\bigl(t_i,\hat{u}(t_i)\bigr) + O\bigl(\alpha^2 + |\alpha\beta| + \beta^2\bigr). \tag{6.4.4}
$$
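The error estimate in (6.4.4) can also be checked numerically (an illustration we add, using the same sample $f$ as in the demos below): halving $\alpha$ and $\beta$ together should shrink the linearization residual by about a factor of 4.

import numpy as np

# f depends only on t + u here, so f_t and f_u coincide.
f = lambda t, u: np.sin((t + u) ** 2)
df = lambda t, u: 2 * (t + u) * np.cos((t + u) ** 2)

t0, u0 = 0.5, -1.0
for m in range(4):
    alpha = beta = 0.1 / 2**m
    resid = f(t0 + alpha, u0 + beta) - (
        f(t0, u0) + alpha * df(t0, u0) + beta * df(t0, u0)
    )
    print(f"alpha = beta = {alpha:.5f}: residual = {resid:.2e}")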
Matching this expression to the term in brackets in (6.4.3), it seems natural to select $\alpha = h/2$ and $\beta = \frac{1}{2}h f\bigl(t_i,\hat{u}(t_i)\bigr)$. Doing so, we find

$$
\hat{u}(t_{i+1}) = \hat{u}(t_i) + h\left[f\bigl(t_i+\alpha,\hat{u}(t_i)+\beta\bigr)\right] + O(h\alpha^2 + h|\alpha\beta| + h\beta^2 + h^3). \tag{6.4.5}
$$

Truncation of the series here results in a new one-step method,

$$
u_{i+1} = u_i + h f\Bigl(t_i + \tfrac{1}{2}h,\; u_i + \tfrac{1}{2}h f(t_i, u_i)\Bigr), \tag{6.4.6}
$$

known as the improved Euler method, which we abbreviate as IE2.
Thanks to the definitions above of $\alpha$ and $\beta$, the omitted terms are of size

$$
O(h\alpha^2 + h|\alpha\beta| + h\beta^2 + h^3) = O(h^3).
$$

Therefore $h\tau_{i+1}=O(h^3)$, and the order of accuracy of improved Euler is two.
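A minimal numerical check of this conclusion (our addition, not the text's code): for $u'=u$, $u(0)=1$, doubling the number of steps should cut the error at $t=1$ by about $2^2=4$.

import numpy as np

def ie2_step(f, t, u, h):
    # One step of improved Euler: an Euler half-step to predict, then a full step.
    v = u + 0.5 * h * f(t, u)
    return u + h * f(t + 0.5 * h, v)

f = lambda t, u: u
for n in [20, 40, 80]:
    h, u = 1.0 / n, 1.0
    for i in range(n):
        u = ie2_step(f, i * h, u, h)
    print(n, abs(u - np.e))    # error should shrink ~4x per doubling of n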
6.4.2 Implementation

Runge–Kutta methods are called multistage methods. We can see why if we interpret (6.4.6) from the inside out. In the first stage, the method takes an Euler half-step to time $t_i+\frac{1}{2}h$:
$$
\begin{split}
k_1 &= h f(t_i,u_i), \\
v &= u_i + \tfrac{1}{2}k_1.
\end{split}
$$

The second stage employs an Euler-style strategy over the whole time step, but using the value from the first stage to get the slope:
$$
\begin{split}
k_2 &= h f\left(t_i+\tfrac{1}{2}h,\, v\right),\\
u_{i+1} &= u_i + k_2.
\end{split}
$$

Our implementation of IE2 is shown in Function 6.4.1.
Function 6.4.1 (MATLAB): Improved Euler method for an IVP
function [t, u] = ie2(ivp, a, b, n)
    % IE2 Improved Euler method for an IVP.
    % Input:
    %   ivp    structure defining the IVP
    %   a, b   endpoints of time interval (scalars)
    %   n      number of time steps (integer)
    % Output:
    %   t      selected nodes (vector, length n+1)
    %   u      solution values (array, m by (n+1))
    du_dt = ivp.ODEFcn;
    u0 = ivp.InitialValue;
    p = ivp.Parameters;

    % Define time discretization.
    h = (b - a) / n;
    t = a + h * (0:n)';

    % Initialize solution array.
    u = zeros(length(u0), n+1);
    u(:, 1) = u0(:);

    % Time stepping.
    for i = 1:n
        uhalf = u(:, i) + h/2 * du_dt(t(i), u(:, i), p);
        u(:, i+1) = u(:, i) + h * du_dt(t(i) + h/2, uhalf, p);
    end
Function 6.4.1 (Python): Improved Euler method for an IVP
import numpy as np

def ie2(dudt, tspan, u0, n):
    """
    ie2(dudt, tspan, u0, n)

    Apply the improved Euler method to solve the vector-valued IVP u' = `dudt`(t, u)
    over the interval `tspan` with u(`tspan[0]`) = `u0`, using `n` subintervals/steps.
    Returns a vector of times and an array of solution values.
    """
    # Time discretization.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n + 1)

    # Initialize output.
    u = np.tile(np.array(u0), (n + 1, 1))

    # Time stepping.
    for i in range(n):
        uhalf = u[i] + h / 2 * dudt(t[i], u[i])
        u[i+1] = u[i] + h * dudt(t[i] + h / 2, uhalf)

    return t, u.T
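A quick usage sketch (our addition; it assumes the `ie2` above is in scope): solve $u'=-2tu$, $u(0)=2$, whose exact solution is $2e^{-t^2}$, and print the maximum error over the grid.

import numpy as np

t, u = ie2(lambda t, u: -2 * t * u, (0.0, 2.0), [2.0], 200)
print(np.max(np.abs(u[0] - 2 * np.exp(-t**2))))    # roughly 1e-4 at this step size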
6.4.3 More Runge–Kutta methods

While it is interesting to interpret IE2 as a pair of Euler-like steps, the Taylor series derivation is the only way to see that it will be more accurate than Euler, and it is also the path to deriving other methods. Moving to higher orders of accuracy requires introducing additional stages, each having free parameters so that more terms in the series may be matched. The algebra grows rapidly in size and complexity, though there is a sophisticated theory for keeping track of it. We do not give the derivation details.
A generic $s$-stage RK method takes the form

$$
\begin{split}
k_1 &= h f(t_i,u_i),\\
k_2 &= h f(t_i+c_1h,\, u_i + a_{11}k_1),\\
k_3 &= h f(t_i+c_2h,\, u_i + a_{21}k_1 + a_{22}k_2),\\
&\vdots\\
k_s &= h f(t_i + c_{s-1}h,\, u_i + a_{s-1,1}k_1 + \cdots + a_{s-1,s-1}k_{s-1}),\\
u_{i+1} &= u_i + b_1k_1 + \cdots + b_s k_s.
\end{split}
$$

This recipe is completely determined by the number of stages $s$ and the constants $a_{ij}$, $b_j$, and $c_i$. Often an RK method is presented as just a table of these numbers, as in
$$
\begin{array}{r|ccccc}
0 & & & & & \\
c_1 & a_{11} & & & & \\
c_2 & a_{21} & a_{22} & & & \\
\vdots & \vdots & & \ddots & & \\
c_{s-1} & a_{s-1,1} & \cdots & & a_{s-1,s-1} & \\ \hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s
\end{array}
$$

For example, IE2 is given by
$$
\begin{array}{r|cc}
0 & & \\
\frac{1}{2} & \frac{1}{2} & \\ \hline
 & 0 & 1
\end{array}
$$

Here are two more two-stage, second-order methods, modified Euler and Heun's method, respectively:
$$
\begin{array}{r|cc}
0 & & \\
1 & 1 & \\ \hline
 & \frac{1}{2} & \frac{1}{2}
\end{array}
\qquad\qquad
\begin{array}{r|cc}
0 & & \\
\frac{2}{3} & \frac{2}{3} & \\ \hline
 & \frac{1}{4} & \frac{3}{4}
\end{array}
$$
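To illustrate reading one of these tables (a worked example we add), the modified Euler tableau above, with $c_1=1$, $a_{11}=1$, and $b_1=b_2=\frac{1}{2}$, translates to

$$
\begin{split}
k_1 &= h f(t_i, u_i),\\
k_2 &= h f(t_i + h,\, u_i + k_1),\\
u_{i+1} &= u_i + \tfrac{1}{2}k_1 + \tfrac{1}{2}k_2.
\end{split}
$$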
The most commonly used RK method, and perhaps the most popular IVP method of all, is the fourth-order one given by

$$
\begin{array}{r|cccc}
0 & & & & \\
\frac{1}{2} & \frac{1}{2} & & & \\
\frac{1}{2} & 0 & \frac{1}{2} & & \\
1 & 0 & 0 & 1 & \\ \hline
 & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6}
\end{array}
$$

This formula is often referred to as the fourth-order RK method, even though there are many others, and we refer to it as RK4. Written out, the recipe is as follows:

$$
\begin{split}
k_1 &= h f(t_i, u_i),\\
k_2 &= h f\left(t_i+\tfrac{1}{2}h,\, u_i+\tfrac{1}{2}k_1\right),\\
k_3 &= h f\left(t_i+\tfrac{1}{2}h,\, u_i+\tfrac{1}{2}k_2\right),\\
k_4 &= h f(t_i+h,\, u_i+k_3),\\
u_{i+1} &= u_i + \tfrac{1}{6}k_1 + \tfrac{1}{3}k_2 + \tfrac{1}{3}k_3 + \tfrac{1}{6}k_4.
\end{split}
$$
Our implementation is given in Function 6.4.2.
Function 6.4.2 (MATLAB): Fourth-order Runge–Kutta for an IVP
function [t, u] = rk4(ivp, a, b, n)
    % RK4 Fourth-order Runge-Kutta for an IVP.
    % Input:
    %   ivp    structure defining the IVP
    %   a, b   endpoints of time interval (scalars)
    %   n      number of time steps (integer)
    % Output:
    %   t      selected nodes (vector, length n+1)
    %   u      solution values (array, m by (n+1))
    du_dt = ivp.ODEFcn;
    u0 = ivp.InitialValue;
    p = ivp.Parameters;

    % Define time discretization.
    h = (b - a) / n;
    t = a + h * (0:n)';

    % Initialize solution array.
    u = zeros(length(u0), n+1);
    u(:, 1) = u0(:);

    % Time stepping.
    for i = 1:n
        k1 = h * du_dt(t(i), u(:, i), p);
        k2 = h * du_dt(t(i) + h/2, u(:, i) + k1/2, p);
        k3 = h * du_dt(t(i) + h/2, u(:, i) + k2/2, p);
        k4 = h * du_dt(t(i) + h, u(:, i) + k3, p);
        u(:, i+1) = u(:, i) + (k1 + 2*(k2 + k3) + k4) / 6;
    end
Function 6.4.2 (Python): Fourth-order Runge–Kutta for an IVP
import numpy as np

def rk4(dudt, tspan, u0, n):
    """
    rk4(dudt, tspan, u0, n)

    Apply "the" fourth-order Runge-Kutta method to solve the vector-valued IVP
    u' = `dudt`(t, u) over the interval `tspan` with u(`tspan[0]`) = `u0`, using `n`
    subintervals/steps. Returns a vector of times and an array of solution values.
    """
    # Time discretization.
    a, b = tspan
    h = (b - a) / n
    t = np.linspace(a, b, n + 1)

    # Initialize output.
    u = np.tile(np.array(u0), (n + 1, 1))

    # Time stepping.
    for i in range(n):
        k1 = h * dudt(t[i], u[i])
        k2 = h * dudt(t[i] + h / 2, u[i] + k1 / 2)
        k3 = h * dudt(t[i] + h / 2, u[i] + k2 / 2)
        k4 = h * dudt(t[i] + h, u[i] + k3)
        u[i+1] = u[i] + (k1 + 2 * (k2 + k3) + k4) / 6

    return t, u.T
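As an aside (a sketch we add, not one of the text's functions), any explicit tableau can drive a single generic stepper whose stage loop mirrors the generic formulas of the previous section; here `A`, `b`, `c` are in standard Butcher form, with a leading zero in `c` and a strictly lower triangular `A`.

import numpy as np

def explicit_rk_step(f, t, u, h, A, b, c):
    # One step of the explicit RK method defined by the tableau (A, b, c).
    s = len(b)
    k = []
    for j in range(s):
        stage_u = u + sum(A[j][l] * k[l] for l in range(j))
        k.append(h * f(t + c[j] * h, stage_u))
    return u + sum(b[j] * k[j] for j in range(s))

# The RK4 tableau from above.
A = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

# One step on u' = sin((t+u)^2) reproduces the k1-k4 recipe coded in rk4.
f = lambda t, u: np.sin((t + u) ** 2)
print(explicit_rk_step(f, 0.0, -1.0, 0.1, A, b, c))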
Demo: Convergence of Runge–Kutta methods (Julia). We solve the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$.
f(u, p, t) = sin((t + u)^2)
tspan = (0.0, 4.0)
u₀ = -1.0
ivp = ODEProblem(f, u₀, tspan)
We use a DifferentialEquations solver to construct an accurate approximation to the exact solution.
u_ref = solve(ivp, Tsit5(), reltol=1e-14, abstol=1e-14);
Now we perform a convergence study of our two Runge–Kutta implementations.
n = [round(Int, 2 * 10^k) for k in 0:0.5:3]
err_IE2, err_RK4 = [], []
for n in n
    t, u = FNC.ie2(ivp, n)
    push!(err_IE2, maximum(@. abs(u_ref(t) - u)))
    t, u = FNC.rk4(ivp, n)
    push!(err_RK4, maximum(@. abs(u_ref(t) - u)))
end
pretty_table([n err_IE2 err_RK4], header=["n", "IE2 error", "RK4 error"])
The amount of computational work at each time step is assumed to be proportional to the number of stages. Let’s compare on an apples-to-apples basis by using the number of $f$-evaluations on the horizontal axis.
plot([2n 4n], [err_IE2 err_RK4], m=3, label=["IE2" "RK4"],
xaxis=(:log10, "f-evaluations"), yaxis=(:log10, "inf-norm error"),
title="Convergence of RK methods", leg=:bottomleft)
plot!(2n, 1e-5 * (n / n[end]) .^ (-2), l=:dash, label=L"O(n^{-2})")
plot!(4n, 1e-10 * (n / n[end]) .^ (-4), l=:dash, label=L"O(n^{-4})")
The fourth-order variant is more efficient in this problem over a wide range of accuracy.
Demo: Convergence of Runge–Kutta methods (MATLAB). We solve the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$.
ivp = ode;
ivp.ODEFcn = @(t, u, p) sin((t + u)^2);
ivp.InitialValue = -1;
a = 0; b = 4;
We use a built-in solver to construct an accurate approximation to the exact solution.
ivp.AbsoluteTolerance = 1e-13;
ivp.RelativeTolerance = 1e-13;
u_ref = solutionFcn(ivp, a, b);
Now we perform a convergence study of our two Runge–Kutta implementations.
n = round(2 * 10.^(0:0.5:3)');
err = zeros(length(n), 2);
for i = 1:length(n)
    [t, u] = ie2(ivp, a, b, n(i));
    err(i, 1) = norm(u_ref(t) - u, Inf);
    [t, u] = rk4(ivp, a, b, n(i));
    err(i, 2) = norm(u_ref(t) - u, Inf);
end
disp(table(n, err(:, 1), err(:, 2), variableNames=["n", "IE2 error", "RK4 error"]))
The amount of computational work at each time step is assumed to be proportional to the number of stages. Let’s compare on an apples-to-apples basis by using the number of $f$-evaluations on the horizontal axis.
clf, loglog([2*n 4*n], err, '-o')
hold on
loglog(2*n, 1e-5 * (n / n(end)) .^ (-2), '--')
loglog(4*n, 1e-10 * (n / n(end)) .^ (-4), '--')
xlabel("f-evaluations"); ylabel("inf-norm error")
title("Convergence of RK methods")
legend("IE2", "RK4", "O(n^{-2})", "O(n^{-4})", "location", "southwest")
The fourth-order variant is more efficient in this problem over a wide range of accuracy.
Demo: Convergence of Runge–Kutta methods (Python). We solve the IVP $u'=\sin[(u+t)^2]$ over $0\le t \le 4$, with $u(0)=-1$. We start by getting a reference solution to validate against.
from numpy import sin, linspace, array
from scipy.integrate import solve_ivp
from matplotlib.pyplot import loglog, plot, xlabel, ylabel, legend, title
from prettytable import PrettyTable
# FNC is assumed to be the module containing Functions 6.4.1 and 6.4.2 (ie2, rk4).

du_dt = lambda t, u: sin((t + u)**2)
tspan = (0.0, 4.0)
u0 = -1.0
sol = solve_ivp(du_dt, tspan, [u0], dense_output=True, atol=1e-13, rtol=1e-13)
u_ref = sol.sol
Now we perform a convergence study of our two Runge–Kutta implementations.
n = array([int(2 * 10**k) for k in linspace(0, 3, 7)])
err = {"IE2": [], "RK4": []}
results = PrettyTable(["n", "IE2 error", "RK4 error"])
for i in range(len(n)):
    t, u = FNC.ie2(du_dt, tspan, u0, n[i])
    err["IE2"].append(abs(u_ref(t)[0] - u[0]).max())
    t, u = FNC.rk4(du_dt, tspan, u0, n[i])
    err["RK4"].append(abs(u_ref(t)[0] - u[0]).max())
    results.add_row([n[i], err["IE2"][-1], err["RK4"][-1]])
print(results)
The amount of computational work at each time step is assumed to be proportional to the number of stages. Let’s compare on an apples-to-apples basis by using the number of $f$-evaluations on the horizontal axis.
loglog(2 * n, err["IE2"], "-o", label="IE2")
loglog(4 * n, err["RK4"], "-o", label="RK4")
plot(2 * n, 0.5 * err["IE2"][-1] * (n / n[-1])**(-2), "--", label="2nd order")
plot(4 * n, 0.5 * err["RK4"][-1] * (n / n[-1])**(-4), "--", label="4th order")
xlabel("f-evaluations"), ylabel("inf-norm error")
legend()
title("Convergence of RK methods");
The fourth-order variant is more efficient in this problem over a wide range of accuracy.
6.4.4 Efficiency

As with rootfinding and integration, the usual point of view is that evaluations of $f$ are the only significant computations and are therefore to be minimized in number. One of the most important characteristics of a multistage method is that each stage requires an evaluation of $f$; that is, a single time step of an $s$-stage method requires $s$ evaluations of $f$.
For $s \le 4$, an $s$-stage method can achieve order of accuracy equal to $s$, and since each added order shrinks the error by another factor of $h$, trading a stage for an increase in order is a good deal. But $s=5$, 6, or 7 gives a maximal order of accuracy of only $s-1$; this decreases to $s-2$ for $s=8$ and $s=9$, etc. Fourth order is considered adequate and the sweet spot for many applications.
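To make the trade concrete, here is a back-of-envelope calculation (our illustration, with the error constant arbitrarily set to 1): on a unit time interval, suppose the error behaves like $h^p = (s/N)^p$, where $N = sn$ is the total number of $f$-evaluations. Reaching a target error of $10^{-6}$ then costs about 2000 evaluations at second order but only about 130 at fourth.

# Illustrative cost model only: err = (s/N)^p, hence N = s * err**(-1/p).
target = 1e-6
for s, p in [(2, 2), (4, 4)]:
    N = s * target ** (-1 / p)
    print(f"order {p} ({s} stages): about {N:.0f} f-evaluations")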
6.4.5 Exercises

1. ✍ For each IVP, write out (possibly using a calculator) the first time step of the improved Euler method with $h=0.2$.
(a) $u' = -2tu, \ 0 \le t \le 2, \ u(0) = 2; \ \hat{u}(t) = 2e^{-t^2}$

(b) $u' = u + t, \ 0 \le t \le 1, \ u(0) = 2; \ \hat{u}(t) = -1-t+3e^t$

(c) $(1+x^3)uu' = x^2, \ 0 \le x \le 3, \ u(0) = 1; \ \hat{u}(x) = [1+(2/3)\ln(1+x^3)]^{1/2}$
2. ✍ Use the modified Euler method to solve the problems in the preceding exercise.
3. ⌨ Modify Function 6.4.2 to implement the modified Euler method. Test your function on the IVP in part (a) of Exercise 1 by solving with $n=30,60,90,\ldots,300$ and plotting the convergence of the error at the final time together with a line showing $O(n^{-2})$.
4. ✍ Use Heun’s method to solve the problems in Exercise 1 above.
5. ⌨ Modify Function 6.4.2 to implement Heun’s method. Test your function on the IVP in part (a) of Exercise 1 by solving with $n=30,60,90,\ldots,300$ and plotting the convergence of the error at the final time together with a line showing $O(n^{-2})$.
6. ✍ Use RK4 to solve the problems in Exercise 1 above.
7. ✍ Using (6.4.3) and (6.4.4), show that the modified Euler method has order of accuracy at least 2.

8. ✍ Using (6.4.3) and (6.4.4), show that Heun’s method has order of accuracy at least 2.
9. ⌨ For each IVP, compute the solution using Function 6.4.2. (i) Plot the solution for $n=300$. (ii) For $n=100,200,300,\ldots,1000$, compute the error at the final time and make a log-log convergence plot, including a reference line for fourth-order convergence.
(a) $u'' + 9u = 9t, \ 0< t< 2\pi, \ u(0) = 1, \ u'(0) = 1; \ \hat{u}(t) = t+\cos(3t)$

(b) $u'' + 9u = \sin(2t), \ 0< t< 2\pi, \ u(0) = 2, \ u'(0) = 1;$
$\quad \hat{u}(t) = (1/5)\sin(3t) + 2\cos(3t) + (1/5)\sin(2t)$

(c) $u'' - 9u = 9t, \ 0< t< 1, \ u(0) = 2, \ u'(0) = -1; \ \hat{u}(t) = e^{3t} + e^{-3t} - t$

(d) $u'' + 4u' + 4u = t, \ 0< t< 4, \ u(0) = 1, \ u'(0) = 3/4; \ \hat{u}(t) = (3t+5/4)e^{-2t} + (t-1)/4$

(e) $x^2 y'' + 5xy' + 4y = 0, \ 1<x<e^2, \ y(1) = 1, \ y'(1) = -1; \ \hat{y}(x) = x^{-2}(1 + \ln x)$

(f) $2x^2 y'' + 3xy' - y = 0, \ 1<x<16, \ y(1) = 4, \ y'(1) = -1; \ \hat{y}(x) = 2(x^{1/2} + x^{-1})$

(g) $x^2 y'' - xy' + 2y = 0, \ 1<x<e^{\pi}, \ y(1) = 3, \ y'(1) = 4;$
$\quad \hat{y}(x) = x\left[3\cos(\ln x) + \sin(\ln x)\right]$

(h) $x^2 y'' + 3xy' + 4y = 0, \ e^{\pi/12} < x < e^{\pi}, \ y(e^{\pi/12}) = 0, \ y'(e^{\pi/12}) = -6;$
$\quad \hat{y}(x) = x^{-1}\left[3\cos(3\ln x) + \sin(3\ln x)\right]$
10. ⌨ Do Exercise 6.3.4, but using Function 6.4.2 instead of `solve`.
11. ✍ Consider the problem $u'=cu$, $u(0) = 1$ for constant $c$ and $t>0$.

(a) Find an explicit formula in terms of $h$ and $c$ for $u_{i+1}/u_i$ in the modified Euler method.

(b) Show that if $ch=-3$, then $|u_i|\to\infty$ as $i\to\infty$, while the exact solution $\hat{u}(t)$ approaches zero as $t\to\infty$.