Functions
Hat function
""" hatfun(t, k) Create a piecewise linear hat function, where `t` is a vector of n+1 interpolation nodes and `k` is an integer in 0:n giving the index of the node where the hat function equals one. """ function hatfun(t, k) n = length(t) - 1 return function (x) if k > 0 && t[k] ≤ x ≤ t[k+1] return (x - t[k]) / (t[k+1] - t[k]) elseif k < n && t[k+1] ≤ x ≤ t[k+2] return (t[k+2] - x) / (t[k+2] - t[k+1]) else return 0 end end end
Piecewise linear interpolation
""" plinterp(t, y) Construct a piecewise linear interpolating function for data values in `y` given at nodes in `t`. """ function plinterp(t, y) n = length(t) - 1 H = [hatfun(t, k) for k in 0:n] return x -> sum(y[k+1] * H[k+1](x) for k in 0:n) end
Cubic spline interpolation
""" spinterp(t, y) Construct a cubic not-a-knot spline interpolating function for data values in `y` given at nodes in `t`. """ function spinterp(t, y) n = length(t) - 1 h = [t[k+1] - t[k] for k in 1:n] # Preliminary definitions. Z = zeros(n, n) In = I(n) E = In[1:n-1, :] J = diagm(0 => ones(n), 1 => -ones(n - 1)) H = diagm(0 => h) # Left endpoint interpolation: AL = [In Z Z Z] vL = y[1:n] # Right endpoint interpolation: AR = [In H H^2 H^3] vR = y[2:n+1] # Continuity of first derivative: A1 = E * [Z J 2 * H 3 * H^2] v1 = zeros(n - 1) # Continuity of second derivative: A2 = E * [Z Z J 3 * H] v2 = zeros(n - 1) # Not-a-knot conditions: nakL = [zeros(1, 3 * n) [1 -1 zeros(1, n - 2)]] nakR = [zeros(1, 3 * n) [zeros(1, n - 2) 1 -1]] # Assemble and solve the full system. A = [AL; AR; A1; A2; nakL; nakR] v = [vL; vR; v1; v2; 0; 0] z = A \ v # Break the coefficients into separate vectors. rows = 1:n a = z[rows] b = z[n.+rows] c = z[2*n.+rows] d = z[3*n.+rows] S = [Polynomial([a[k], b[k], c[k], d[k]]) for k in 1:n] # This function evaluates the spline when called with a value # for x. return function (x) if x < t[1] || x > t[n+1] # outside the interval return NaN elseif x == t[1] return y[1] else k = findlast(x .> t) # last node to the left of x return S[k](x - t[k]) end end end
Fornberg’s algorithm for finite difference weights
""" fdweights(t, m) Compute weights for the `m`th derivative of a function at zero using values at the nodes in vector `t`. """ function fdweights(t, m) # This is a compact implementation, not an efficient one. # Recursion for one weight. function weight(t, m, r, k) # Inputs # t: vector of nodes # m: order of derivative sought # r: number of nodes to use from t # k: index of node whose weight is found if (m < 0) || (m > r) # undefined coeffs must be zero c = 0 elseif (m == 0) && (r == 0) # base case of one-point interpolation c = 1 else # generic recursion if k < r c = (t[r+1] * weight(t, m, r-1, k) - m * weight(t, m-1, r-1, k)) / (t[r+1] - t[k+1]) else numer = r > 1 ? prod(t[r] - x for x in t[1:r-1]) : 1 denom = r > 0 ? prod(t[r+1] - x for x in t[1:r]) : 1 β = numer / denom c = β * (m * weight(t, m - 1, r-1, r-1) - t[r] * weight(t, m, r-1, r-1)) end end return c end r = length(t) - 1 w = zeros(size(t)) return [weight(t, m, r, k) for k in 0:r] end
Trapezoid formula for numerical integration
""" trapezoid(f, a, b, n) Apply the trapezoid integration formula for integrand `f` over interval [`a`,`b`], broken up into `n` equal pieces. Returns the estimate, a vector of nodes, and a vector of integrand values at the nodes. """ function trapezoid(f, a, b, n) h = (b - a) / n t = range(a, b, length = n + 1) y = f.(t) T = h * (sum(y[2:n]) + 0.5 * (y[1] + y[n+1])) return T, t, y end
Adaptive integration
""" intadapt(f, a, b, tol) Adaptively integrate `f` over [`a`,`b`] to within target error tolerance `tol`. Returns the estimate and a vector of evaluation nodes. """ function intadapt(f, a, b, tol, fa = f(a), fb = f(b), m = (a + b) / 2, fm = f(m)) # Use error estimation and recursive bisection. # These are the two new nodes and their f-values. xl = (a + m) / 2 fl = f(xl) xr = (m + b) / 2 fr = f(xr) # Compute the trapezoid values iteratively. h = (b - a) T = [0.0, 0.0, 0.0] T[1] = h * (fa + fb) / 2 T[2] = T[1] / 2 + (h / 2) * fm T[3] = T[2] / 2 + (h / 4) * (fl + fr) S = (4T[2:3] - T[1:2]) / 3 # Simpson values E = (S[2] - S[1]) / 15 # error estimate if abs(E) < tol * (1 + abs(S[2])) # acceptable error? Q = S[2] # yes--done nodes = [a, xl, m, xr, b] # all nodes at this level else # Error is too large--bisect and recurse. QL, tL = intadapt(f, a, m, tol, fa, fm, xl, fl) QR, tR = intadapt(f, m, b, tol, fm, fb, xr, fr) Q = QL + QR nodes = [tL; tR[2:end]] # merge the nodes w/o duplicate end return Q, nodes end
About the code
The intended way for a user to call Function 5.7.1 is with only f, a, b, and tol provided. The default values of the other parameters then compute the function values at the endpoints, the midpoint of the interval, and the function value at that midpoint. Recursive calls from within the function itself supply all of that information directly, since it was already calculated along the way.
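As a minimal sketch of that usage (the integrand and tolerance here are arbitrary choices, not from the text):

Q, t = FNC.intadapt(x -> cos(5x), 0, 1, 1e-8)    # only f, a, b, tol are supplied
# The recursive calls inside pass fa, fb, m, fm explicitly,
# so no function value is ever computed twice.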
Examples
5.1 The interpolation problem
Example 5.1.1
Here are some points that we could consider to be observations of an unknown function on $[-1,1]$.
using Plots
n = 5
t = range(-1, 1, n+1)
y = @. t^2 + t + 0.05 * sin(20t)
scatter(t, y, label="data", legend=:top)
The polynomial interpolant, as computed using fit, looks very sensible. It’s the kind of function you’d take home to meet your parents.
using Polynomials
p = Polynomials.fit(t, y, n) # interpolating polynomial
plot!(p, -1, 1, label="interpolant")
But now consider a different set of points generated in almost exactly the same way.
n = 18
t = range(-1, 1, n+1)
y = @. t^2 + t + 0.05 * sin(20t)
scatter(t, y, label="data", leg=:top)
The points themselves are unremarkable. But take a look at what happens to the polynomial interpolant.
p = Polynomials.fit(t, y, n)
x = range(-1, 1, 1000) # use a lot of points
plot!(x, p.(x), label="interpolant")
Surely there must be functions that are more intuitively representative of those points!
Example 5.1.3
Let us recall the data from Demo 5.1.1.
n = 12
t = range(-1, 1, n+1)
y = @. t^2 + t + 0.5 * sin(20t)
scatter(t, y, label="data", leg=:top)
Here is an interpolant that is linear between each consecutive pair of nodes, using plinterp from Piecewise linear interpolation.
p = FNC.plinterp(t, y)
plot!(p, -1, 1, label="piecewise linear")
We may prefer a smoother interpolant that is piecewise cubic, generated using Spline1D from the Dierckx package.
using Dierckx
p = Spline1D(t, y)
plot!(x -> p(x), -1, 1, label="piecewise cubic")
Example 5.1.4
In Demo 5.1.1 and Demo 5.1.3 we saw a big difference between polynomial interpolation and piecewise polynomial interpolation of some arbitrarily chosen data. The same effects can be seen clearly in the cardinal functions, which are closely tied to the condition numbers.
n = 18
t = range(-1, 1, n+1)
y = [zeros(9); 1; zeros(n - 9)]; # data for 10th cardinal function
scatter(t, y, label="data")
ϕ = Spline1D(t, y)
plot!(x -> ϕ(x), -1, 1;
label="spline",
xlabel=L"x", ylabel=L"\phi(x)",
title="Piecewise cubic cardinal function")
The piecewise cubic cardinal function is nowhere greater than one in absolute value. This happens to be true for all the cardinal functions, ensuring a good condition number for any interpolation with these functions. But the story for global polynomials is very different.
scatter(t, y, label="data")
ϕ = Polynomials.fit(t, y, n)
plot!(x -> ϕ(x), -1, 1;
label="polynomial", legend=:top,
xlabel=L"x", ylabel=L"\phi(x)",
title="Polynomial cardinal function")
From the figure we can see that the condition number for polynomial interpolation on these nodes is at least 500.
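We can back up that estimate with a quick numerical check (our addition, not part of the original demo) by sampling the cardinal polynomial densely; its maximum magnitude gives a lower bound on the condition number.

xx = range(-1, 1, 2001)
@show maximum(abs, ϕ.(xx));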
5.2 Piecewise linear interpolation
Example 5.2.1
Let’s define a set of four nodes (i.e., $n=3$ in our formulas).
t = [0, 0.55, 0.7, 1]
4-element Vector{Float64}:
0.0
0.55
0.7
1.0
We plot the hat functions $H_0, H_1, H_2, H_3$.
Tip
Use annotate! to add text to a plot.
using Plots
plt = plot(layout=(4, 1), legend=:top,
xlabel=L"x", ylims=[-0.1, 1.1], ytick=[])
for k in 0:3
Hₖ = FNC.hatfun(t, k)
plot!(Hₖ, 0, 1, subplot=k + 1)
scatter!(t, Hₖ.(t), m=3, subplot=k + 1)
annotate!(t[k+1], 0.25, text(latexstring("H_$k"), 10), subplot=k+1)
end
plt
Example 5.2.2
We generate a piecewise linear interpolant of $f(x)=e^{\sin(7x)}$.
f = x -> exp(sin(7x))
plot(f, 0, 1, label="function", xlabel=L"x", ylabel=L"y")
First we sample the function to create the data.
t = [0, 0.075, 0.25, 0.55, 0.7, 1] # nodes
y = f.(t) # function values
scatter!(t, y, label="values at nodes")
Now we create a callable function that will evaluate the piecewise linear interpolant at any $x$, and then plot it.
p = FNC.plinterp(t, y)
plot!(p, 0, 1, label="interpolant", title="PL interpolation")
Example 5.2.3
We measure the convergence rate for piecewise linear interpolation of $f(x)=e^{\sin(7x)}$ over $[0,1]$.
f = x -> exp(sin(7x))
x = range(0, 1, 10001) # sample the difference at many points
n = @. round(Int, 10^(1:0.25:3.5))
maxerr = zeros(length(n))
for (k, n) in enumerate(n)
t = (0:n) / n # interpolation nodes
p = FNC.plinterp(t, f.(t))
err = @. f(x) - p(x)
maxerr[k] = norm(err, Inf)
end
data = (n=n[1:4:end], err=maxerr[1:4:end])
@pt :header=["n", "max-norm error"] data
As predicted, a factor of 10 in $n$ produces a factor of 100 in the error. In a convergence plot, it is traditional to have $h$ decrease from left to right, so we expect a straight line of slope -2 on a log-log plot.
h = @. 1 / n
order2 = @. 10 * (h / h[1])^2
plot(h, maxerr, m=:o, label="error", xflip=true)
plot!(h, order2;
l=:dash, label=L"O(h^2)",
xaxis=(:log10, L"h"), yaxis=(:log10, L"|| f-p\, ||_\infty"),
title="Convergence of PL interpolation")
5.3 Cubic splines
Example 5.3.1
For illustration, here is a spline interpolant using just a few nodes.
using Plots
f = x -> exp(sin(7x))
plot(f, 0, 1, label="function", xlabel=L"x", ylabel=L"y")
t = [0, 0.075, 0.25, 0.55, 0.7, 1] # nodes
y = f.(t) # values at nodes
scatter!(t, y, label="values at nodes")
S = FNC.spinterp(t, y)
plot!(S, 0, 1, label="spline")
Now we look at the convergence rate as the number of nodes increases.
x = (0:10000) / 1e4 # sample the difference at many points
n = @. round(Int, 2^(3:0.5:7)) # numbers of nodes
err = zeros(length(n))
for (k, n) in enumerate(n)
t = (0:n) / n
S = FNC.spinterp(t, f.(t))
dif = @. f(x) - S(x)
err[k] = norm(dif, Inf)
end
@pt :header=["n", "max-norm error"] [n[1:2:end] err[1:2:end]]
Since we expect convergence that is $O(n^{-4})$, we use a log-log graph of error and expect a straight line of slope -4.
order4 = @. (n / n[1])^(-4)
plot(n, [err order4];
m=[:o :none], l=[:solid :dash],
label=["error" "4th order"],
xaxis=(:log10, "n"), yaxis=(:log10, L"|| f-S\,||_\infty"),
title="Convergence of spline interpolation")
5.4 Finite differences
Example 5.4.3
If $f(x)=e^{\sin(x)}$, then $f'(0)=1$.
f = x -> exp(sin(x));
Here are the first two centered differences from Table 5.4.1.
h = 0.05
CD2 = (-f(-h) + f(h)) / 2h
CD4 = (f(-2h) - 8f(-h) + 8f(h) - f(2h)) / 12h
@show (CD2, CD4);
(CD2, CD4) = (0.9999995835069508, 1.0000016631938748)
Here are the first two forward differences from Table 5.4.2.
FD1 = (-f(0) + f(h)) / h
FD2 = (-3f(0) + 4f(h) - f(2h)) / 2h
@show (FD1, FD2);
(FD1, FD2) = (1.024983957209069, 1.0000996111012461)
Finally, here are the backward differences that come from reverse-negating the forward differences.
BD1 = (-f(-h) + f(0)) / h
BD2 = (f(-2h) - 4f(-h) + 3f(0)) / 2h
@show (BD1, BD2);
(BD1, BD2) = (0.9750152098048326, 0.9999120340342049)
Example 5.4.4
If $f(x)=e^{\sin(x)}$, then $f''(0)=1$.
f = x -> exp(sin(x));
Here is a centered estimate given by (5.4.12).
h = 0.05
CD2 = (f(-h) - 2f(0) + f(h)) / h^2
@show CD2;
CD2 = 0.9993749480847745
For the same $h$, here are forward estimates given by (5.4.13) and (5.4.14).
FD1 = (f(0) - 2f(h) + f(2h)) / h^2
FD2 = (2f(0) - 5f(h) + 4f(2h) - f(3h)) / h^2
@show (FD1, FD2);
(FD1, FD2) = (0.9953738443129188, 1.0078811479598213)
Finally, here are the backward estimates that come from reversing (5.4.13) and (5.4.14).
BD1 = (f(-2h) - 2f(-h) + f(0)) / h^2
BD2 = (-f(-3h) + 4f(-2h) - 5f(-h) + 2f(0)) / h^2
@show (BD1, BD2);
(BD1, BD2) = (0.9958729691748489, 1.0058928192789194)
Example 5.4.5
We will estimate the derivative of $f(x)=\cos(x^2)$ at $x=0.5$ using five nodes.
t = [0.35, 0.5, 0.57, 0.6, 0.75] # nodes
f = x -> cos(x^2)
df_dx = x -> -2 * x * sin(x^2)
exact_value = df_dx(0.5)
-0.24740395925452294
We have to shift the nodes so that the point of estimation for the derivative is at $x=0$. (To subtract a scalar from a vector, we must use the .- operator.)
w = FNC.fdweights(t .- 0.5, 1)
5-element Vector{Float64}:
-0.5303030303030298
-21.61904761904763
45.09379509379508
-23.333333333333307
0.38888888888888845
The finite-difference formula is a dot product (i.e., inner product) between the vector of weights and the vector of function values at the nodes.
fd_value = dot(w, f.(t))
-0.2473074229061363
We can reproduce the weights in the finite-difference tables by using equally spaced nodes with $h=1$. For example, here is a one-sided formula at four nodes.
FNC.fdweights(0:3, 1)
4-element Vector{Float64}:
-1.8333333333333333
3.0
-1.5
0.3333333333333333
By giving nodes of type Rational, we can get exact values instead.
FNC.fdweights(Rational.(0:3), 1)
4-element Vector{Rational{Int64}}:
-11//6
3
-3//2
1//3
5.5 Convergence of finite differences
Example 5.5.3
Let’s observe the convergence of the formulas in Example 5.5.1 and Example 5.5.2, applied to the function $f(x)=\sin(e^{x+1})$ at $x=0$.
f = x -> sin(exp(x + 1))
exact_value = exp(1) * cos(exp(1))
-2.478349732955235
We’ll compute the formulas in parallel for a sequence of $h$ values.
h = [5 / 10^n for n in 1:6]
FD = zeros(length(h), 2)
for (k, h) in enumerate(h)
FD[k, 1] = (f(h) - f(0)) / h
FD[k, 2] = (f(h) - f(-h)) / 2h
end
@pt :header=["h", "FD1", "FD2"] [h FD]
All that’s easy to see from this table is that FD2 appears to converge to the same result as FD1, but more rapidly. A table of errors is more informative.
error_FD = @. exact_value - FD
@pt :header=["h", "error in FD1", "error in FD2"] [h error_FD]
In each row, $h$ is decreased by a factor of 10, so that the error is reduced by a factor of 10 in the first-order method and 100 in the second-order method.
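We can confirm those factors directly by taking ratios of errors in successive rows (a supplementary check, not in the original demo); the two columns should show values near 10 and 100, respectively.

ratios = error_FD[1:end-1, :] ./ error_FD[2:end, :]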
A graphical comparison can be useful as well. On a log-log scale, the error should (as $h\to 0$) be a straight line whose slope is the order of accuracy. However, it’s conventional in convergence plots to show $h$ decreasing from left to right, which negates the slopes.
using Plots
plot(h, abs.(error_FD);
m=:o, label=["FD1" "FD2"], leg=:bottomleft,
xflip=true, xaxis=(:log10, L"h"), yaxis=(:log10, "error"),
title="Convergence of finite differences")
# Add lines for perfect 1st and 2nd order.
plot!(h, [h h .^ 2], l=:dash, label=[L"O(h)" L"O(h^2)"])
Example 5.5.4
Let $f(x)=e^{-1.3x}$. We apply finite-difference formulas of first, second, and fourth order to estimate $f'(0)=-1.3$.
f = x -> exp(-1.3 * x);
exact = -1.3
h = [1 / 10^n for n in 1:12]
FD = zeros(length(h), 3)
for (k, h) in enumerate(h)
nodes = h * (-2:2)
vals = @. f(nodes)
FD[k, 1] = dot([0 0 -1 1 0] / h, vals)
FD[k, 2] = dot([0 -1 / 2 0 1 / 2 0] / h, vals)
FD[k, 3] = dot([1 / 12 -2 / 3 0 2 / 3 -1 / 12] / h, vals)
end
@pt :header=["h", "FD1", "FD2", "FD4"] [h FD]
They all seem to be converging to -1.3. The convergence plot reveals some interesting structure to the errors, though.
err = @. abs(FD - exact)
plot(h, err;
m=:o, label=["FD1" "FD2" "FD4"], legend=:bottomright,
xaxis=(:log10, L"h"), xflip=true, yaxis=(:log10, "error"),
title="FD error with roundoff")
# Add line for perfect 1st order.
plot!(h, 0.1 * eps() ./ h, l=:dash, color=:black, label=L"O(h^{-1})")
Again the graph is made so that $h$ decreases from left to right. The errors are dominated at first by truncation error, which decreases most rapidly for the fourth-order formula. However, increasing roundoff error eventually equals and then dominates the truncation error as $h$ continues to decrease. As the order of accuracy increases, the crossover point moves to the left (greater efficiency) and down (greater accuracy).
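As a supplementary check (our addition, not in the original demo), we can locate the crossover for each formula by finding the $h$ that minimizes the observed error:

for (j, label) in enumerate(["FD1", "FD2", "FD4"])
    k = argmin(err[:, j])
    println("$label: best h = $(h[k]), error = $(err[k, j])")
end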
5.6 Numerical integration
Example 5.6.1
The antiderivative of $e^x$ is, of course, itself. That makes evaluation of $\int_0^1 e^x\,dx$ by the Fundamental Theorem trivial.
exact = exp(1) - 1
1.718281828459045
The Julia package QuadGK has an all-purpose numerical integrator that estimates the value without finding the antiderivative first. As you can see here, it’s often just as accurate.
using QuadGK
Q, errest = quadgk(x -> exp(x), 0, 1)
@show Q;
Q = 1.7182818284590453
The numerical approach is also far more robust. For example, $e^{\sin x}$ has no useful antiderivative. But numerically, it’s no more difficult.
Q, errest = quadgk(x -> exp(sin(x)), 0, 1)
@show Q;
Q = 1.6318696084180515
When you look at the graphs of these functions, what’s remarkable is that one of these areas is basic calculus while the other is almost impenetrable analytically. From a numerical standpoint, they are practically the same problem.
using Plots
plot([exp, x -> exp(sin(x))], 0, 1, fill=0, layout=(2, 1),
xlabel=L"x", ylabel=[L"e^x" L"e^{\sin(x)}"], ylim=[0, 2.7])
Example 5.6.2
We will approximate the integral of the function $f(x)=e^{\sin(7x)}$ over the interval $[0,2]$.
f = x -> exp(sin(7 * x));
a = 0;
b = 2;
In lieu of the exact value, we use the QuadGK package to find an accurate result.
Tip
If a function has multiple return values, you can use an underscore _ to indicate a return value you want to ignore.
Q, _ = quadgk(f, a, b, atol=1e-14, rtol=1e-14);
println("Integral = $Q")
Integral = 2.6632197827615394
Here is the trapezoid result at $n=40$, and its error.
T, t, y = FNC.trapezoid(f, a, b, 40)
@show (T, Q - T);
(T, Q - T) = (2.662302935602287, 0.0009168471592522209)
In order to check the order of accuracy, we increase $n$ by orders of magnitude and observe how the error decreases.
n = [10^n for n in 1:5]
err = zeros(length(n))
for (k, n) in enumerate(n)
T, t, y = FNC.trapezoid(f, a, b, n)
err[k] = Q - T
end
@pt :header=["n", "error"] [n err]
Each increase by a factor of 10 in $n$ cuts the error by a factor of about 100, which is consistent with second-order convergence. Another check is that a log-log graph should give a line of slope -2 as $n\to\infty$.
plot(n, abs.(err);
m=:o, label="results",
xaxis=(:log10, L"n"), yaxis=(:log10, "error"),
title="Convergence of trapezoidal integration")
# Add line for perfect 2nd order.
plot!(n, 3e-3 * (n / n[1]) .^ (-2), l=:dash, label=L"O(n^{-2})")
Example 5.6.3
We estimate $\int_0^2 x^2 e^{-2x}\,dx$ using extrapolation. First we use quadgk to get an accurate value.
f = x -> x^2 * exp(-2x);
a = 0;
b = 2;
Q, _ = quadgk(f, a, b, atol=1e-14, rtol=1e-14)
@show Q;
Q = 0.1904741736116139
We start with the trapezoid formula using $n=N$ subintervals.
N = 20; # the coarsest formula
n = N;
h = (b - a) / n;
t = h * (0:n);
y = f.(t);
We can now apply the weights to get the first trapezoid estimate.
T = [h * (sum(y[2:n]) + y[1] / 2 + y[n+1] / 2)]
1-element Vector{Float64}:
0.19041144993926784
Now we double to $n=2N$, but we only need to evaluate $f$ at every other interior node and apply (5.6.18).
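For reference, the halving update in that equation has the form (our restatement; compare with the code below): after doubling $n$ and halving the spacing to $h$,

$$T_f(n) = \tfrac{1}{2}\,T_f(n/2) + h \sum_{i \text{ odd}} f(t_i),$$

where the sum runs over the newly added nodes $t_1, t_3, \ldots, t_{n-1}$.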
n = 2n;
h = h / 2;
t = h * (0:n);
T = [T; T[end] / 2 + h * sum(f.(t[2:2:n]))]
2-element Vector{Float64}:
0.19041144993926784
0.19045880585951175
We can repeat the same code to double $n$ again.
n = 2n;
h = h / 2;
t = h * (0:n);
T = [T; T[end] / 2 + h * sum(f.(t[2:2:n]))]
3-element Vector{Float64}:
0.19041144993926784
0.19045880585951175
0.1904703513046443
Let us now do the first level of extrapolation to get results from Simpson’s formula. We combine the elements T[i] and T[i+1] the same way for $i=1$ and $i=2$.
S = [(4T[i+1] - T[i]) / 3 for i in 1:2]
2-element Vector{Float64}:
0.19047459116625973
0.19047419978635513
With the two Simpson values S[1] and S[2] in hand, we can do one more level of extrapolation to get a sixth-order accurate result.
R = (16S[2] - S[1]) / 15
0.1904741736943615
We can make a triangular table of the errors:
Tip
The value nothing equals nothing except nothing.
err = [T .- Q [nothing; S .- Q] [nothing; nothing; R - Q]]
@pt :header=["order 2", "order 4", "order 6"] err
If we consider the computational time to be dominated by evaluations of $f$, then we have obtained a result with about twice as many accurate digits as the best trapezoid result, at virtually no extra cost.
5.7 Adaptive integration
Example 5.7.1
This function gets increasingly oscillatory as $x$ increases.
using Plots
f = x -> (x + 1)^2 * cos((2x + 1) / (x - 4.3))
plot(f, 0, 4, xlabel=L"x", ylabel=L"f(x)")
Accordingly, the trapezoid rule is more accurate on the left half of this interval than on the right half.
using QuadGK
left_val, _ = quadgk(f, 0, 2, atol=1e-14, rtol=1e-14)
right_val, _ = quadgk(f, 2, 4, atol=1e-14, rtol=1e-14)
n = [50 * 2^k for k in 0:3]
err = zeros(length(n), 2)
for (k, n) in enumerate(n)
T, _ = FNC.trapezoid(f, 0, 2, n)
err[k, 1] = T - left_val
T, _ = FNC.trapezoid(f, 2, 4, n)
err[k, 2] = T - right_val
end
@pt :header=["n", "left error", "right error"] [n err]
Both the picture and the numerical results suggest that more nodes should be used on the right half of the interval than on the left half.
Example 5.7.2
We’ll integrate the function from Demo 5.7.1.
f = x -> (x + 1)^2 * cos((2x + 1) / (x - 4.3));
We perform the integration and show the nodes selected underneath the curve.
A, t = FNC.intadapt(f, 0, 4, 0.001)
@show num_nodes = length(t);
plot(f, 0, 4;
color=:black, legend=:none,
xlabel=L"x", ylabel=L"f(x)",
title="Adaptive node selection")
plot!(t, f.(t), seriestype=:sticks, m=(:o, 2))
The error turns out to be a bit more than we requested. It’s only an estimate, not a guarantee.
Q, _ = quadgk(f, 0, 4, atol=1e-14, rtol=1e-14); # 'exact' value
println("error: $(Q-A)");
error: -0.022002813037627522
Let’s see how the number of integrand evaluations and the error vary with the requested tolerance.
tol = [1 / 10^k for k in 4:14]
err, n = [], []
for tol in tol
A, t = FNC.intadapt(f, 0, 4, tol)
push!(err, Q - A)
push!(n, length(t))
end
@pt :header=["tolerance", "error", "number of nodes"] [tol err n][1:2:end, :]
As you can see, even though the errors are not smaller than the tolerances, the two columns decrease in tandem. If we consider now the convergence not in $h$, which is poorly defined now, but in the number of nodes actually chosen, we come close to the fourth-order accuracy of the underlying Simpson scheme.
plot(n, abs.(err);
m=:o, label="results",
xaxis=(:log10, "number of nodes"), yaxis=(:log10, "error"),
title="Convergence of adaptive integration")
order4 = @. 0.01 * (n / n[1])^(-4)
plot!(n, order4, l=:dash, label=L"O(n^{-4})")