Adaptive integration

using FundamentalsNumericalComputation
f = x -> (x+1)^2*cos((2*x+1)/(x-4.3));

@show exact,errest = quadgk(f,0,4,atol=1e-14,rtol=1e-14);  # 'exact' value
(exact, errest) = quadgk(f, 0, 4, atol = 1.0e-14, rtol = 1.0e-14) = (-2.825533373437446, 2.1510571102112408e-14)
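As an independent cross-check (in Python rather than Julia, since `quadgk` is Julia-specific), SciPy's adaptive quadrature reproduces this reference value for the same integrand and interval:

```python
from math import cos
from scipy.integrate import quad

# Integrand from the code above: f(x) = (x+1)^2 * cos((2x+1)/(x-4.3))
f = lambda x: (x + 1)**2 * cos((2*x + 1) / (x - 4.3))

val, errest = quad(f, 0, 4)
print(val)   # agrees with the quadgk reference value above
```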

We perform the integration and show the nodes selected underneath the curve.

Q,t = FNC.intadapt(f,0,4,0.001)
@show num_nodes = length(t);
plot(f, 0, 4, label="f(x)",
    xlabel="x", ylabel="f(x)", title="Adaptive node selection")
scatter!(t, f.(t), label="nodes")

num_nodes = length(t) = 69

The error turns out to be a bit more than we requested. It’s only an estimate, not a guarantee.

@show err = exact - Q;
err = exact - Q = -0.022002813037627078
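The estimate-driven strategy behind `FNC.intadapt` can be sketched as recursive adaptive Simpson quadrature. This is a minimal Python illustration of the idea, not the book's implementation: each subinterval is accepted when the difference between one and two Simpson panels, which is proportional to the local error estimate, falls under its share of the tolerance.

```python
from math import cos

def simpson(f, a, b):
    """One Simpson panel on [a, b]."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def intadapt(f, a, b, tol):
    """Recursive adaptive Simpson (illustrative sketch).

    The two-panel result has error roughly (whole - halves) / 15,
    so that quantity serves as the local error estimate."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    if abs(halves - whole) < 15 * tol:
        return halves
    # Otherwise bisect, splitting the tolerance between the halves.
    return intadapt(f, a, m, tol / 2) + intadapt(f, m, b, tol / 2)

f = lambda x: (x + 1)**2 * cos((2*x + 1) / (x - 4.3))
Q = intadapt(f, 0, 4, 1e-6)
print(Q)
```

As in the text, the tolerance steers the local error estimate, so the final error can land somewhat above or below the requested value.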

Let’s see how the number of integrand evaluations and the error vary with the requested tolerance.

table = (tol=Float64[], err=Float64[], n=Int[])
for tol in 10.0 .^ (-4:-1:-14)
    Q, t = FNC.intadapt(f, 0, 4, tol)
    push!(table.tol, tol)
    push!(table.err, exact - Q)
    push!(table.n, length(t))
end
pretty_table(table, header=["tolerance", "error", "f-evaluations"])

tolerance   error                    f-evaluations
0.0001      -0.00041946881777477074    113
1.0e-5       4.7897662537543795e-5     181
1.0e-6       6.314383849126415e-6      297
1.0e-7      -6.639249763296107e-7      489
1.0e-8       7.18080701567203e-8       757
1.0e-9       1.2652385805722588e-8    1193
1.0e-10     -8.441261023506286e-10    2009
1.0e-11      2.6127100483108734e-11   3157
1.0e-12      4.044631296551415e-11    4797
1.0e-13     -1.937117133365973e-12    7997
1.0e-14      1.6298074001497298e-13  12609

As you can see, even though the errors are not always smaller than the requested tolerances, the two columns decrease in tandem. If we consider convergence not in \(h\) (which is poorly defined for an adaptive method) but in the number of nodes actually chosen, we come close to the fourth-order accuracy of the underlying Simpson scheme.
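One rough way to quantify that claim, using the node counts and errors from the table above (copied here by hand), is the slope of \(\log|\text{err}|\) against \(\log n\) between the first and last rows. A value near \(-4\) is consistent with fourth-order behavior:

```python
from math import log

# (n, |error|) pairs transcribed from the table above
n   = [113, 181, 297, 489, 757, 1193, 2009, 3157, 4797, 7997, 12609]
err = [0.00041946881777477074, 4.7897662537543795e-5, 6.314383849126415e-6,
       6.639249763296107e-7, 7.18080701567203e-8, 1.2652385805722588e-8,
       8.441261023506286e-10, 2.6127100483108734e-11, 4.044631296551415e-11,
       1.937117133365973e-12, 1.6298074001497298e-13]

# Endpoint-to-endpoint slope of log(err) vs log(n): the apparent order.
slope = log(err[-1] / err[0]) / log(n[-1] / n[0])
print(slope)   # in the neighborhood of -4
```

This is only a crude two-point estimate; the plotted comparison against an exact \(O(n^{-4})\) line below tells the same story graphically.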

n = table.n
scatter(n, abs.(table.err), label="results",
    xaxis=(:log10,"number of nodes"), yaxis=(:log10,"error"),
    title="Convergence of adaptive quadrature")

order4 = @. 0.01*(n/n[1])^(-4)
plot!(n, order4, l=:dash, label="4th order")