# The Newton idea for systems
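
Given a system $\mathbf{f}(\mathbf{x})=\boldsymbol{0}$ with Jacobian matrix $\mathbf{J}$, each Newton iteration solves a linear system for the step and then updates the estimate:

$$
\mathbf{J}(\mathbf{x}_k)\,\mathbf{s}_k = -\mathbf{f}(\mathbf{x}_k), \qquad \mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k.
$$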

Let us use Newton’s method on the system defined by the function below.

```julia
using LinearAlgebra

function nlvalue(x)
    return [ exp(x[2]-x[1]) - 2,
             x[1]*x[2] + x[3],
             x[2]*x[3] + x[1]^2 - x[2]
           ]
end
```

```
nlvalue (generic function with 1 method)
```


Here is a function that computes the Jacobian matrix.

```julia
function nljac(x)
    J = zeros(3,3)
    J[1,:] = [-exp(x[2]-x[1]), exp(x[2]-x[1]), 0]
    J[2,:] = [x[2], x[1], 1]
    J[3,:] = [2*x[1], x[3]-1, x[2]]
    return J
end
```

```
nljac (generic function with 1 method)
```
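
Hand-coded Jacobians are easy to get wrong, so it is worth spot-checking `nljac` against finite differences. The following is a minimal sketch; `fdjac` and its step-size choice are our own additions for illustration, not part of the example.

```julia
using LinearAlgebra

# Forward-difference approximation to the Jacobian of f at x.
# The step h = sqrt(eps()) roughly balances truncation and roundoff error.
function fdjac(f, x; h=sqrt(eps()))
    fx = f(x)
    J = zeros(length(fx), length(x))
    for j in 1:length(x)
        e = zeros(length(x))
        e[j] = h                          # perturb only the jth component
        J[:, j] = (f(x + e) - fx) / h
    end
    return J
end
```

At a test point `xt`, the difference `norm(nljac(xt) - fdjac(nlvalue, xt), Inf)` should be on the order of `sqrt(eps())`, i.e. about `1e-8`.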


(These functions could be placed in separate files, or embedded within another function as we have done here.) Our initial guess at a root is the origin.

```julia
x = [0., 0., 0.]
```

```
3-element Array{Float64,1}:
 0.0
 0.0
 0.0
```


We need the value (residual) of the nonlinear function, and its Jacobian, at this value of $\mathbf{x}$.

```julia
f = nlvalue(x)
```

```
3-element Array{Float64,1}:
 -1.0
  0.0
  0.0
```

```julia
J = nljac(x)
```

```
3×3 Array{Float64,2}:
 -1.0   1.0  0.0
  0.0   0.0  1.0
  0.0  -1.0  0.0
```


We solve for the Newton step and compute the new estimate.

```julia
s = -(J\f)
newx = x + s
```

```
3-element Array{Float64,1}:
 -1.0
  0.0
  0.0
```


Here is the new residual.

```julia
nlvalue(newx)
```

```
3-element Array{Float64,1}:
 0.7182818284590451
 0.0
 1.0
```


We don’t seem to be especially close to a root yet. Let’s iterate a few more times. To do so, we start keeping the root estimates as elements of a vector (i.e., a vector of vectors), and likewise for the residuals.

```julia
x = [x, newx]
```

```
2-element Array{Array{Float64,1},1}:
 [0.0, 0.0, 0.0]
 [-1.0, 0.0, 0.0]
```

```julia
y = [f, nlvalue(newx)]
```

```
2-element Array{Array{Float64,1},1}:
 [-1.0, 0.0, 0.0]
 [0.7182818284590451, 0.0, 1.0]
```

```julia
for n = 2:7
    f = nlvalue(x[n])
    push!(y, f)
    s = -(nljac(x[n]) \ f)
    push!(x, x[n] + s)
end
```


We find the infinity norms of the residuals.

```julia
residual_norm = [ norm(f, Inf) for f in y ]   # infinity norm of each residual vector
```

```
8-element Array{Float64,1}:
 1.0
 1.0
 1.0
 0.20229273399879752
 0.010252098544050403
 2.1556384274173945e-5
 1.9899884518004285e-10
 1.3877787807814457e-17
```


We don’t know an exact answer, so we will take the last computed value as its surrogate.

```julia
r = x[end]
```

```
3-element Array{Float64,1}:
 -0.45803328064126886
  0.23511389991867648
  0.10768999090411434
```


The following subtracts r from each root estimate and takes the norm of the difference.

```julia
error = [ norm(xk - r) for xk in x[1:end-1] ]
```

```
7-element Array{Float64,1}:
 0.525994454581891
 0.6005027934725727
 0.15184335534811869
 0.010200335585503865
 2.4987473439459857e-5
 1.5326877679492286e-10
 1.3877787807814457e-17
```


Now we check for quadratic convergence by computing the ratios of successive logarithms of the errors; quadratic convergence predicts these ratios will approach 2.

```julia
ratios = [ log(error[k+1]) / log(error[k]) for k in 1:length(error)-1 ]
```

```
6-element Array{Float64,1}:
 0.7937993447128695
 3.695980885448303
 2.4326597889977144
 2.3110932374368462
 2.1325411123294264
 1.7176219080468489
```


For a brief time we see ratios around 2, but then the limitation of double precision makes it impossible for the number of accurate digits to keep doubling.
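
The steps above can be collected into a reusable function. The following is a sketch; the name `newtonsys`, the stopping tolerances, and the iteration cap are choices made here for illustration, not a library routine.

```julia
using LinearAlgebra

# Newton iteration for a system f(x) = 0 with Jacobian function jac(x).
# Returns the vector of all iterates; the last entry is the best root estimate.
function newtonsys(f, jac, x0; maxiter=40, xtol=1e-12, ftol=1e-12)
    x = [float.(x0)]
    y = f(x[1])
    for k in 1:maxiter
        s = -(jac(x[k]) \ y)      # Newton step: solve J(x_k) s = -f(x_k)
        push!(x, x[k] + s)
        y = f(x[k+1])
        # stop when the step or the residual is negligible
        if norm(s) < xtol || norm(y) < ftol
            break
        end
    end
    return x
end
```

Called with `newtonsys(nlvalue, nljac, zeros(3))`, the last iterate reproduces the root `r` found above.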