Convergence of fixed point iteration

We revisit the fixed point iteration above and investigate the observed convergence more closely. Recall that above we calculated g′(r) ≈ −0.42 at the convergent fixed point.

using FundamentalsNumericalComputation
p = Polynomial([3.5,-4,1])
r = roots(p)
@show rmin,rmax = sort(r);
(rmin, rmax) = sort(r) = [1.2928932188134525, 2.7071067811865475]
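The claimed value of g′(r) can be checked directly: with g(x) = x − p(x) we have g′(x) = 1 − p′(x), and Polynomials.jl supplies the derivative. This is a minimal sketch that recomputes the root so it runs on its own.

```julia
using Polynomials

p = Polynomial([3.5, -4, 1])     # p(x) = x^2 - 4x + 3.5
r = maximum(roots(p))            # the convergent fixed point, r ≈ 2.7071
gprime = 1 - derivative(p)(r)    # g(x) = x - p(x)  ⇒  g'(x) = 1 - p'(x)
# gprime ≈ -0.4142, consistent with the rate observed below
```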

Here is the fixed point iteration. This time we keep track of the whole sequence of approximations.

g = x -> x - p(x);
x = 2.1; 
for k = 1:12
    x = [x;g(x[k])]
end
x
13-element Array{Float64,1}:
 2.1
 2.59
 2.7419000000000002
 2.69148439
 2.713333728386328
 2.7044887203327885
 2.7081843632566587
 2.7066592708954196
 2.7072919457529734
 2.7070300492259465
 2.707138558717502
 2.707093617492436
 2.7071122335938966

It’s easiest to construct and plot the sequence of errors.

err = @. abs(x - rmax)
plot(0:12, err, m=:o, leg=:none,
    xaxis=("iteration number"), yaxis=("error", :log10),
    title="Convergence of fixed point iteration")

It’s quite clear that the convergence quickly settles into a linear rate. We could estimate this rate by doing a least-squares fit to a straight line. Keep in mind that the values for small k should be left out of the computation, as they don’t represent the linear trend.
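To make the fit transparent, here is a hand-rolled version on synthetic data satisfying err_k = C·σ^k exactly; on a log scale this is a straight line with slope log σ, recoverable by least squares. (A sketch with made-up values of C and σ, not the errors computed above.)

```julia
sigma_true = 0.414
k = 4:12                          # drop the small-k values, as recommended
err = @. 0.6 * sigma_true^k       # synthetic linearly convergent errors
y = log.(err)                     # lies on the line log(C) + k*log(σ)
A = [ones(length(k)) collect(k)]  # design matrix: intercept and slope
c = A \ y                         # least-squares coefficients
sigma_est = exp(c[2])             # exponentiate the slope to recover σ
```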

y = log.(err[5:12])
p = fit(5.0:12.0,y,1)
-0.6680573888302028 - 0.8807181589734455∙x

We can exponentiate the slope to get the convergence constant σ.

sigma = exp(p.coeffs[2])
0.41448513854854724

The numerical values of the error should decrease by a factor of σ at each iteration. We can check this easily with an elementwise division.

@. err[9:12] / err[8:11]
4-element Array{Float64,1}:
 0.4137660520817109
 0.4143987269383
 0.4141368304124451
 0.4142453399049934

The methods for finding σ agree well.
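Knowing σ also predicts the cost of the iteration: since the error contracts by a factor of σ per step, roughly log(tol/err₀)/log(σ) iterations are needed to reach a tolerance tol. A sketch, with the tolerance chosen only for illustration:

```julia
sigma = 0.4145                     # rate estimated above
err0 = abs(2.1 - (2 + 1/sqrt(2)))  # initial error; r = 2 + 1/√2 exactly
tol = 1e-12
k_needed = ceil(Int, log(tol / err0) / log(sigma))
# about 31 iterations to drive the error below 1e-12
```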