Convergence of Euler’s method

We consider the IVP \(u'=\sin[(u+t)^2]\) over \(0\le t \le 4\), with \(u(0)=-1\).

using FundamentalsNumericalComputation
f = (u,p,t) -> sin((t+u)^2);
tspan = (0.0,4.0);
u0 = -1.0;

ivp = ODEProblem(f,u0,tspan)
ODEProblem with uType Float64 and tType Float64. In-place: false
timespan: (0.0, 4.0)
u0: -1.0
t,u = FNC.euler(ivp,20)

plot(t,u,m=:o,label="n=20",
    xlabel="t", ylabel="u(t)", title="Solution by Euler's method" )

We could define a different interpolant to get a smoother picture above, but the derivation of Euler's method assumed a piecewise linear interpolant, so that is the most meaningful choice. Instead, we can request more steps, which makes the piecewise linear interpolant itself look smoother.

t,u = FNC.euler(ivp,200)
plot!(t,u,label="n=200")

Increasing \(n\) changed the solution noticeably. Since we know that interpolants and finite differences become more accurate as \(h\to 0\), we should expect the same from Euler's method.
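
Recall that with step size \(h=(b-a)/n\) over the interval \([a,b]=[0,4]\), Euler's method advances the numerical solution by

\[
u_{i+1} = u_i + h f(t_i,u_i), \qquad i=0,\dots,n-1,
\]

and its global error is expected to be \(O(h)\), that is, \(O(n^{-1})\).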

We don’t have an exact solution to compare to, so we use a solver from DifferentialEquations with very tight error tolerances to construct an accurate reference solution.

u_exact = solve(ivp,Tsit5(),reltol=1e-14,abstol=1e-14)

plot!(u_exact,l=:dash,label="accurate")

Now we can perform a convergence study.

n = @. 50*2^(0:5)
err = zeros(size(n))
for (j,m) in enumerate(n)
    t,u = FNC.euler(ivp,m)
    err[j] = norm(u_exact.(t)-u,Inf)
end

pretty_table((n=n,err=err),["n","max-norm error"],backend=:html)
    n    max-norm error
   50    0.029996164425832583
  100    0.014229199704658635
  200    0.00694432968416131
  400    0.003429468304963379
  800    0.0017041006511159251
 1600    0.0008494164953237737

The error is almost perfectly halved each time \(n\) is doubled; for example, the ratio of the first two errors is \(0.0300/0.0142 \approx 2.1\). Since doubling \(n\) halves the step size \(h\), this is consistent with first-order convergence, which a log-log plot should reveal as a line of slope \(-1\).

plot(n,err,m=:o,label="results", 
    xaxis=(:log10,"n"), yaxis=(:log10,"inf-norm error"), title="Convergence of Euler's method")

plot!(n,0.05*(n/n[1]).^(-1),l=:dash,label="1st order")
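
The observed order of accuracy can also be estimated directly from the errors, without a plot. The following is a minimal sketch that assumes the vectors n and err computed above: because \(n\) doubles from one run to the next, the base-2 logarithm of each ratio of successive errors estimates the order.

# n doubles between runs, so log2 of each successive error ratio estimates the order
p = log2.( err[1:end-1] ./ err[2:end] )

Each entry of p should be close to 1, in agreement with the dashed first-order reference line.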