Convergence of Newton’s method

We again look at finding a solution of $$xe^x=2$$ near $$x=1$$. To apply Newton’s method, we need to calculate values of both the residual function $$f$$ and its derivative.
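
Recall that each Newton step replaces $$f$$ by its tangent line at the current iterate and solves for the tangent’s root: $$x_{k+1} = x_k - f(x_k)/f'(x_k)$$. Here $$f(x)=xe^x-2$$, so the product rule gives $$f'(x)=e^x(x+1)$$.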

using FundamentalsNumericalComputation

f = x -> x*exp(x) - 2;      # residual function
dfdx = x -> exp(x)*(x+1);   # its derivative, from the product rule


We don’t know the exact root, so we use nlsolve to determine the “true” value.

r = nlsolve(x -> f(x[1]),[1.]).zero

1-element Array{Float64,1}:
0.8526055020137259
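
As a quick sanity check (an addition to the demo, not in the original), the residual at this reference value should be at roundoff level:

f(r[1])    # expect a result within a few machine epsilons of zero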


We use $$x_1=1$$ as a starting guess and apply the iteration in a loop, storing the sequence of iterates in a vector.

x = [1; zeros(6)]
for k = 1:6
    x[k+1] = x[k] - f(x[k]) / dfdx(x[k])    # Newton step
end
x

7-element Array{Float64,1}:
1.0
0.8678794411714423
0.8527833734164099
0.8526055263689221
0.852605502013726
0.8526055020137254
0.8526055020137254


Here is the sequence of errors.

err = @. x - r

7-element Array{Float64,1}:
0.14739449798627413
0.015273939157716465
0.00017787140268399337
2.4355196193148743e-8
1.1102230246251565e-16
-4.440892098500626e-16
-4.440892098500626e-16
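
A hallmark of quadratic convergence is that the ratio $$|x_{k+1}-r|\,/\,|x_k-r|^2$$ approaches a constant. A quick check over the entries not yet dominated by roundoff (an illustrative addition, not part of the original):

[ abs(err[k+1]) / err[k]^2 for k in 1:3 ]    # roughly constant, settling near 0.77 here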


Looking at the exponents of the errors, we see that they roughly double at each iteration until the error becomes comparable to machine precision. We can see this doubling more precisely by taking logs.

logerr = @. log(abs(err))    # natural log of the error magnitudes

7-element Array{Float64,1}:
-1.9146426270180508
-4.181607225912875
-8.634449725601385
-17.53052061416097
-36.7368005696771
-35.35050620855721
-35.35050620855721
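
Equivalently, each entry of logerr should be about twice the one before it, at least until roundoff takes over. A quick check of the successive ratios (an illustrative addition, not part of the original):

[ logerr[k+1] / logerr[k] for k in 1:4 ]    # values near 2 indicate quadratic convergence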


Quadratic convergence isn’t as graphically distinctive as linear convergence.

plot(0:6, abs.(err), m=:o, legend=:none,
    xlabel="\$k\$", yaxis=(:log10, "\$|x_k-r|\$"), title="Quadratic convergence")


This is clearly faster than linear convergence, which would trace a straight line on this semi-log scale; instead the curve steepens with each iteration. If we could use infinite precision, it would continue to steepen forever, but in double precision the error bottoms out near machine epsilon after only a handful of steps.
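
One way to watch the steepening continue is to rerun the iteration in extended precision. Here is a minimal sketch using Julia’s built-in BigFloat type, assuming f and dfdx are defined as above; it is an embellishment of the demo, not part of the original:

setprecision(200)                        # about 60 significant decimal digits
xb = [BigFloat(1); zeros(BigFloat, 6)]
for k = 1:6
    xb[k+1] = xb[k] - f(xb[k]) / dfdx(xb[k])
end
# log-errors measured against the last, most accurate iterate;
# the doubling of exponents now persists through all six steps
Float64.(log.(abs.(xb[1:6] .- xb[7])))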