Adaptive integration
To this point, we have used only equally spaced nodes to compute integrals. Yet there are problems in which non-uniformly distributed nodes would clearly be more appropriate, as demonstrated in An oscillatory integrand.
We would like an algorithm that automatically detects and reacts to a situation like that in An oscillatory integrand, a trait known as adaptivity.
Error estimation
Ideally, we would like to make adaptation decisions based on the error of the integration. Knowing the error exactly would be equivalent to knowing the exact answer, but we can estimate it using the extrapolation technique of Numerical integration. Consider the Simpson formula (156) resulting from one level of extrapolation from trapezoid estimates:

\[ S_f(2n) = \frac{1}{3} \left[ 4 T_f(2n) - T_f(n) \right]. \]
We expect this method to be fourth-order accurate, i.e.,

\[ \int_a^b f(x)\, dx = S_f(2n) + O(n^{-4}). \]
We can further extrapolate to sixth-order accuracy using (157):

\[ R_f(4n) = \frac{1}{15} \left[ 16 S_f(4n) - S_f(2n) \right]. \]
By virtue of its higher order of accuracy, \(R_f(4n)\) should be more accurate than \(S_f(4n)\). Hence a decent estimate of the error in the better of the two Simpson values is

\[ E = R_f(4n) - S_f(4n) = \frac{S_f(4n) - S_f(2n)}{15}. \tag{163} \]
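As a concrete check of these formulas, here is a minimal sketch (not part of the implementation below) that applies them on a single interval to \(\int_0^{\pi/2} e^x \cos x\, dx\), an integral that reappears in the exercises with exact value \((e^{\pi/2}-1)/2\); the estimate \(E\) can then be compared with the true error of \(S_f(4)\).

# Sketch: extrapolate trapezoid values to Simpson values and an error estimate
# for the integral of exp(x)*cos(x) over [0, pi/2], using one interval.
f = x -> exp(x) * cos(x)
a, b = 0, pi/2
h = b - a
m = (a + b) / 2                        # midpoint
xl, xr = (a + m) / 2, (m + b) / 2      # quarter points

T1 = h * (f(a) + f(b)) / 2             # trapezoid, 1 subinterval
T2 = T1/2 + (h/2) * f(m)               # trapezoid, 2 subintervals
T4 = T2/2 + (h/4) * (f(xl) + f(xr))    # trapezoid, 4 subintervals

S2 = (4*T2 - T1) / 3                   # Simpson values by extrapolation
S4 = (4*T4 - T2) / 3
E = (S4 - S2) / 15                     # estimate of the error in S4

exact = (exp(pi/2) - 1) / 2
println("error estimate = ", E, ",  true error = ", exact - S4)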
Divide and conquer
If \(|E|\) is judged to be acceptably small, we are done. This judgment takes some care. For instance, suppose the exact integral is \(10^{20}\). Requiring \(|E| < \delta\ll 1\) would be fruitless in double precision, since it would require more than 20 accurate digits. Hence checking the absolute size of the error alone is not appropriate. Conversely, consider an integral whose exact value is many orders of magnitude smaller than the values of the integrand itself.
We are likely to sample values of the integrand that are larger than, say, \(1/2\) in absolute value, so obtaining this very small result has to rely on subtractive cancellation. We cannot hope for more than a few accurate digits, so a strict test of the relative error is also not recommended. (In other words, we can have an error that is small relative to the data (the integrand), which is \(O(1)\), but not relative to the answer itself.) Typically we use both relative and absolute error, stopping when either one is considered small enough. Algebraically, the test is

\[ |E| < \delta_a + \delta_r \,|S_f(4n)|, \tag{164} \]
where \(\delta_a\) and \(\delta_r\) are given absolute and relative error tolerances, respectively.
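As a sketch of how (164) might look in code, consider the hypothetical helper below (the name accept and the separate arguments delta_a and delta_r are for illustration only; the implementation below uses the single tolerance tol for both roles).

# Hypothetical helper illustrating the combined absolute/relative test (164).
accept(E, Q, delta_a, delta_r) = abs(E) < delta_a + delta_r * abs(Q)

accept(1e-10, 1e20, 1e-12, 1e-12)   # true: the relative term rescues a huge integral
accept(1e-10, 1e-14, 1e-6, 1e-6)    # true: the absolute term rescues a tiny integral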
When \(|E|\) fails to meet (164), we bisect the interval \([a,b]\) to exploit the identity

\[ \int_a^b f(x)\, dx = \int_a^{(a+b)/2} f(x)\, dx + \int_{(a+b)/2}^b f(x)\, dx, \]
and independently compute estimates of each of the half-length integrals. Each of these half-sized computations recursively applies Simpson’s formula and the error estimation criterion, making further bisections as necessary. Such an approach is called divide and conquer in computer science: recursively split the problem into easier pieces and glue the results together.
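In outline, the recursion looks like the following sketch. The name quad_dc and the direct Simpson evaluations here are for illustration only; the actual implementation below also reuses integrand values across levels.

# Schematic divide-and-conquer quadrature (illustrative only).
function quad_dc(f, a, b, tol)
    m = (a + b) / 2
    h = b - a
    # Simpson values on 2 and 4 subintervals, and the error estimate (163).
    S2 = (h/6) * (f(a) + 4*f(m) + f(b))
    S4 = (h/12) * (f(a) + 4*f((a+m)/2) + 2*f(m) + 4*f((m+b)/2) + f(b))
    E = (S4 - S2) / 15
    if abs(E) < tol * (1 + abs(S4))     # acceptable error on this interval?
        return S4
    else
        return quad_dc(f, a, m, tol) + quad_dc(f, m, b, tol)   # bisect and recurse
    end
end

quad_dc(x -> exp(x) * cos(x), 0, pi/2, 1e-8)   # ≈ (e^(π/2) - 1)/2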
Implementation
It is typical to use just the minimal formula \(S_f(4)\) and its error estimate \(E\) to make decisions about adaptivity. A computation of \(S_f(4)\) requires three trapezoid estimates \(T_f(1)\), \(T_f(2)\), and \(T_f(4)\). As observed in (158) and Extrapolation for numerical integration, the five integrand evaluations in \(T_f(4)\) are sufficient to compute all of these values. There is one further way to exploit node locations. For simplicity, assume \([a,b]=[0,1]\). The five nodes used in \(T_f(4)\) are

\[ 0, \quad \frac{1}{4}, \quad \frac{1}{2}, \quad \frac{3}{4}, \quad 1. \]
If we bisect the interval and compute \(T_f(4)\) on the subinterval \([0,1/2]\), we use the nodes

\[ 0, \quad \frac{1}{8}, \quad \frac{1}{4}, \quad \frac{3}{8}, \quad \frac{1}{2}. \]
Only the second and fourth nodes are new. The same is true on the subinterval \([1/2,1]\), and for every recursive bisection.
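A quick way to see the overlap (a throwaway check, not part of the algorithm):

# The five T_f(4) nodes on [0,1] and on the left half [0,1/2].
parent = [0, 1, 2, 3, 4] / 4     # 0, 1/4, 1/2, 3/4, 1
left   = [0, 1, 2, 3, 4] / 8     # 0, 1/8, 1/4, 3/8, 1/2
setdiff(left, parent)            # only 1/8 and 3/8 require new evaluations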
intadapt shows how to exploit this structure. The nested function do_integral
does all of the work. It expects to receive the three nodes and integrand values that it shares with the level above. It adds the two new nodes and uses the set of all five to compute three trapezoid estimates with \(n=1\), \(n=2\), and \(n=4\), using the updating formula (158) twice. It goes on to find the two Simpson approximations and to estimate the error in the better one by (163).
If the error estimate passes the test (164), the better Simpson value is returned as the integral over the given interval. Otherwise, the interval is bisected, the two pieces computed using recursive calls, and those results are added to give the complete integral.
Adaptive integration with error estimation.
"""
    intadapt(f,a,b,tol)

Do adaptive integration to estimate the integral of `f` over
[`a`,`b`] to desired error tolerance `tol`. Returns estimate and a
vector of evaluation nodes used.
"""
function intadapt(f,a,b,tol)
    # Use error estimation and recursive bisection.
    function do_integral(a,fa,b,fb,m,fm,tol)
        # These are the two new nodes and their f-values.
        xl = (a+m)/2;  fl = f(xl);
        xr = (m+b)/2;  fr = f(xr);
        t = [a,xl,m,xr,b]              # all 5 nodes at this level

        # Compute the trapezoid values iteratively.
        h = (b-a)
        T = [0.,0.,0.]
        T[1] = h*(fa+fb)/2
        T[2] = T[1]/2 + (h/2)*fm
        T[3] = T[2]/2 + (h/4)*(fl+fr)

        S = (4*T[2:3]-T[1:2]) / 3      # Simpson values
        E = (S[2]-S[1]) / 15           # error estimate

        if abs(E) < tol*(1+abs(S[2]))  # acceptable error?
            Q = S[2]                   # yes--done
        else
            # Error is too large--bisect and recurse.
            QL,tL = do_integral(a,fa,m,fm,xl,fl,tol)
            QR,tR = do_integral(m,fm,b,fb,xr,fr,tol)
            Q = QL + QR
            t = [tL;tR[2:end]]         # merge the nodes w/o duplicate
        end
        return Q,t
    end

    m = (b+a)/2
    Q,t = do_integral(a,f(a),b,f(b),m,f(m),tol)
    return Q,t
end
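For example, applying intadapt to \(\int_0^{\pi/2} e^x\cos x\, dx\) (which reappears in the exercises and has exact value \((e^{\pi/2}-1)/2\)) returns both the estimate and the vector of adaptively chosen nodes:

# Example use of intadapt, assuming the function above has been defined.
f = x -> exp(x) * cos(x)
Q, t = intadapt(f, 0, pi/2, 1e-8)
exact = (exp(pi/2) - 1) / 2
abs(Q - exact), length(t)        # observed error and number of nodes used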
Although adaptivity and the error estimation that goes with it can be very powerful, they come at some cost. The error estimation cannot be universally perfect, so sometimes the answer will not be as accurate as requested (underestimation) and sometimes the function will be evaluated more times than necessary (overestimation). Subtle problems may arise when the integral is a step within a larger computation (see the last exercise below).
Exercises
⌨ For each integral below, use intadapt with error tolerances \(10^{-2}, 10^{-3}, \ldots, 10^{-12}\). Make a table of errors and the number of integrand evaluation nodes used, and use a convergence plot as in Adaptive integration to compare to fourth-order accuracy. (These integrals were taken from [BJL05].)
(a) \(\displaystyle \int_0^1 x\log(1+x)\, dx = \frac{1}{4}\)
(b) \(\displaystyle \int_0^1 x^2 \tan^{-1}x\, dx = \frac{\pi-2+2\log 2}{12}\)
(c) \(\displaystyle \int_0^{\pi/2}e^x \cos x\, dx = \frac{e^{\pi/2}-1}{2}\)
(d) \(\displaystyle \int_{0}^1 \sqrt{x} \log(x) \, dx = -\frac{4}{9}\) (Note: Although the integrand has the limiting value zero as \(x\to 0\), you have to implement the function carefully to return zero as the value of \(f(0)\), or start the integral at \(x=\macheps\).)
(e) \(\displaystyle \int_0^1 \sqrt{1-x^2}\, dx = \frac{\pi}{4}\)
⌨ For each integral below: (i) use quadgk to find the value to at least 12 digits; (ii) use intadapt to evaluate the integral to a tolerance of \(10^{-8}\); (iii) compute the absolute error and the number of nodes used; (iv) use the \(O(h^2)\) term in the Euler–Maclaurin formula (151) to estimate how many nodes are required by the fixed-stepsize trapezoidal formula to reach an absolute error of \(10^{-8}\).

(a) \(\displaystyle \int_{0.1}^3 \operatorname{sech}(\sin(1/x))\, dx\)
(b) \(\displaystyle\int_{-0.9}^9 \ln\bigl((x+1)^3\bigr)\, dx\)
(c) \(\displaystyle\int_{-\pi}^\pi \cos(x^3)\, dx\)
⌨ An integral such as \(\displaystyle \int_0^1 x^{-\gamma}\, dx\) for \(\gamma>0\), in which the integrand blows up at one or both ends, is known as an improper integral. It has a finite value if \(\gamma<1\), despite the singularity. One way to deal with the infinite value of the integrand at the lower limit is to replace that limit with a small number \(\epsilon\). Using intadapt with a small tolerance, make a log–log plot of the error as a function of \(\epsilon\) for \(\epsilon=10^{-15},10^{-16},\ldots,10^{-45}\). (A more robust way to handle improper integrals is discussed in a later chapter.)
⌨ A curious consequence of our logic in intadapt is that the algorithm uses what we believe to be a more accurate, sixth-order answer only for estimating error; the returned value is the supposedly less accurate \(S_f(4n)\). The practice of returning the extrapolated \(R_f(4n)\) instead is called local extrapolation. Modify intadapt to use local extrapolation and repeat problem 1 above. Is the convergence more like fourth order or sixth order?
⌨ The sine integral function is defined by
\[\operatorname{Si}(x) = \int_0^x \frac{\sin z}{z}\, dz.\]

Use intadapt to plot Si over the interval \([1,10]\). Note: You will need to replace the lower bound of integration by \(\macheps\).
⌨ Adaptive integration can have subtle drawbacks. This exercise is based on the error function, a smooth function defined as
\[\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-s^2}\,ds.\]

(a) Define a function \(g\) that approximates erf by applying trapezoid with \(n=100\). Make a plot of the error \(g(x)-\operatorname{erf}(x)\) at 300 points in the interval \([0,3]\).
(b) Define another approximation \(h\) that applies intadapt with error tolerance \(10^{-7}\). Plot the error in \(h\) as in part (a). Why does it look so different from the previous case?
(c) Suppose you wished to find \(x\) such that \(\operatorname{erf}(x) = .95\) by using rootfinding on one of your two approximations. Which would be preferable?