Python implementations
Functions
Newton’s method
```python
import warnings
import numpy as np

def newton(f, dfdx, x1):
    """
    newton(f, dfdx, x1)

    Use Newton's method to find a root of `f` starting from `x1`, where
    `dfdx` is the derivative of `f`. Returns a vector of root estimates.
    """
    # Operating parameters.
    eps = np.finfo(float).eps
    funtol = 100 * eps
    xtol = 100 * eps
    maxiter = 40

    x = np.zeros(maxiter + 1)  # extra slot keeps x[k + 1] in bounds on the last pass
    x[0] = x1
    y = f(x1)
    dx = np.inf  # for initial pass below
    k = 0
    while (abs(dx) > xtol) and (abs(y) > funtol) and (k < maxiter):
        dydx = dfdx(x[k])
        dx = -y / dydx  # Newton step
        x[k + 1] = x[k] + dx  # new estimate
        k = k + 1
        y = f(x[k])
    if k == maxiter:
        warnings.warn("Maximum number of iterations reached.")
    return x[: k + 1]
```
About the code
Python functions can accept keyword arguments. These are given default values by assignments within the function declaration (for instance, `tol=1e-12` in Function 4.6.3 below), and when the function is called, they may be supplied as `keyword=value` in the argument list. This arrangement is useful when there are multiple optional arguments, because their ordering doesn't matter at the call site.
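As a small, hedged illustration (the function name and signature below are made up, not from the book), keyword arguments with defaults behave like this:

```python
def solve_demo(f, x1, xtol=1e-10, maxiter=60):
    # hypothetical signature: xtol and maxiter are optional keyword arguments
    return xtol, maxiter

print(solve_demo(None, 0.0))                        # defaults apply
print(solve_demo(None, 0.0, maxiter=10))            # override one by name
print(solve_demo(None, 0.0, maxiter=10, xtol=1e-6)) # keyword order is free
```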
The loop here terminates through the conditions in its `while` statement. An alternative is the `break` statement, which causes an immediate exit from the innermost loop in which it appears. It is often used as a safety valve to escape an iteration that may not be able to terminate otherwise.
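For comparison, here is a hedged sketch (not the book's function) of the same Newton iteration written with a `for` loop, where `break` provides the early exit:

```python
def newton_for(f, dfdx, x1, xtol=1e-12, maxiter=40):
    # for-loop variant: break escapes as soon as the step is small enough
    x = x1
    for k in range(maxiter):
        dx = -f(x) / dfdx(x)  # Newton step
        x = x + dx
        if abs(dx) < xtol:  # converged: exit the loop early
            break
    return x

print(newton_for(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
```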
Secant method
```python
import warnings
import numpy as np

def secant(f, x1, x2):
    """
    secant(f, x1, x2)

    Use the secant method to find a root of `f` starting from `x1` and
    `x2`. Returns a vector of root estimates.
    """
    # Operating parameters.
    eps = np.finfo(float).eps
    funtol = 100 * eps
    xtol = 100 * eps
    maxiter = 40

    x = np.zeros(maxiter + 1)  # extra slot keeps x[k + 1] in bounds on the last pass
    x[:2] = [x1, x2]
    y1 = f(x1)
    y2 = 100
    dx = np.inf  # for initial pass below
    k = 1
    while (abs(dx) > xtol) and (abs(y2) > funtol) and (k < maxiter):
        y2 = f(x[k])
        dx = -y2 * (x[k] - x[k - 1]) / (y2 - y1)  # secant step
        x[k + 1] = x[k] + dx  # new estimate
        k = k + 1
        y1 = y2  # current f-value becomes the old one next time
    if k == maxiter:
        warnings.warn("Maximum number of iterations reached.")
    return x[: k + 1]
```
About the code
Because we want to observe the convergence of the method, Function 4.4.2 stores and returns the entire sequence of root estimates. However, only the two most recent estimates are needed by the iterative formula. This is reflected in the use of `y1` and `y2` for the two most recent values of $f$.
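To emphasize the point, here is a hedged sketch (not Function 4.4.2) that keeps only the two most recent values and discards the rest of the history:

```python
import math

def secant_two_values(f, x1, x2, xtol=1e-13, maxiter=40):
    # Only the latest two (x, f(x)) pairs are retained at any time.
    y1 = f(x1)
    for k in range(maxiter):
        y2 = f(x2)
        if y2 == y1:  # secant line is flat; cannot take a step
            break
        dx = -y2 * (x2 - x1) / (y2 - y1)  # secant step
        x1, y1 = x2, y2  # newest pair becomes the old one
        x2 = x2 + dx
        if abs(dx) < xtol:
            break
    return x2

r = secant_two_values(lambda x: x * math.exp(x) - 2, 1.0, 0.5)
print(r, r * math.exp(r) - 2)  # root estimate and its (tiny) residual
```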
Newton’s method for systems
```python
import warnings
import numpy as np
from numpy.linalg import norm, lstsq

def newtonsys(f, jac, x1):
    """
    newtonsys(f, jac, x1)

    Use Newton's method to find a root of a system of equations, starting
    from `x1`. The function `f` should return the residual vector, and the
    function `jac` should return the Jacobian matrix. Returns root estimates
    as a matrix, one estimate per column.
    """
    # Operating parameters.
    funtol = 1000 * np.finfo(float).eps
    xtol = 1000 * np.finfo(float).eps
    maxiter = 40

    x = np.zeros((len(x1), maxiter + 1))  # extra column keeps x[:, k + 1] in bounds
    x[:, 0] = x1
    y, J = f(x1), jac(x1)
    dx = 10.0  # for initial pass below
    k = 0
    while (norm(dx) > xtol) and (norm(y) > funtol) and (k < maxiter):
        dx = -lstsq(J, y)[0]  # Newton step
        x[:, k + 1] = x[:, k] + dx
        k = k + 1
        y, J = f(x[:, k]), jac(x[:, k])
    if k == maxiter:
        warnings.warn("Maximum number of iterations reached.")
    return x[:, : k + 1]
```
About the code
The output of Function 4.5.2 is a matrix whose columns are the successive root estimates. Because the storage for the estimates is preallocated with `np.zeros`, the iterates are kept in double precision even if the starting value `x1` has integer entries.
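A quick hedged check of that claim: assigning integer values into a preallocated NumPy array keeps everything in double precision.

```python
import numpy as np

x = np.zeros((3, 4))     # preallocated storage for iterates; float64 by default
x[:, 0] = [1, 2, 3]      # integer starting values are coerced to floats
print(x.dtype, x[0, 0])  # float64 1.0
```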
Finite differences for Jacobian
```python
import numpy as np

def fdjac(f, x0, y0=None):
    """
    fdjac(f, x0, y0)

    Compute a finite-difference approximation of the Jacobian matrix for
    `f` at `x0`, where `y0` = `f(x0)` may be supplied or, if omitted, is
    computed here.
    """
    if y0 is None:
        y0 = f(x0)  # compute fresh when the caller has no value at hand
    delta = np.sqrt(np.finfo(float).eps)  # FD step size
    m, n = len(y0), len(x0)
    J = np.zeros((m, n))
    I = np.eye(n)
    for j in range(n):
        J[:, j] = (f(x0 + delta * I[:, j]) - y0) / delta
    return J
```
About the code
Function 4.6.1 is written to accept the case where $f$ maps $n$ variables to $m$ values with $m \neq n$, in anticipation of Nonlinear least squares.

Note that the third argument `y0` holds the value $f(x_0)$. The reason is that in some contexts, the caller of `fdjac` may have already computed $y_0$ and can supply it without computational cost, while in other contexts, it must be computed fresh. Since a Python default value cannot refer to earlier arguments in the list, the usual idiom for adapting to either situation is a `None` default that triggers the computation inside the function.
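As a sanity check (not from the book), the forward-difference approximation with step size $\sqrt{\epsilon}$ should agree with an exact Jacobian to roughly eight digits. Here it is applied to a simple hypothetical map from $\mathbb{R}^2$ to $\mathbb{R}^2$:

```python
import numpy as np

def f(x):
    return np.array([x[0]**2 - x[1], x[0] * x[1]])

def exact_jac(x):
    return np.array([[2 * x[0], -1.0], [x[1], x[0]]])

x0 = np.array([1.0, 2.0])
y0 = f(x0)
delta = np.sqrt(np.finfo(float).eps)  # same FD step as Function 4.6.1
J = np.zeros((2, 2))
I = np.eye(2)
for j in range(2):
    J[:, j] = (f(x0 + delta * I[:, j]) - y0) / delta

err = np.abs(J - exact_jac(x0)).max()
print(err)  # on the order of 1e-8
```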
Levenberg’s method
```python
import warnings
import numpy as np
from numpy.linalg import norm, lstsq

def levenberg(f, x1, tol=1e-12):
    """
    levenberg(f, x1, tol)

    Use Levenberg's quasi-Newton iteration to find a root of the system
    `f`, starting from `x1`, with `tol` as the stopping tolerance in both
    step size and residual norm. Returns root estimates as a matrix, one
    estimate per column.  Requires `fdjac` (Function 4.6.1).
    """
    # Operating parameters.
    ftol = tol
    xtol = tol
    maxiter = 40

    n = len(x1)
    x = np.zeros((n, maxiter + 1))  # extra column keeps x[:, k + 1] in bounds
    x[:, 0] = x1
    fk = f(x1)
    k = 0
    s = 10.0
    Ak = fdjac(f, x[:, 0], fk)  # start with FD Jacobian
    jac_is_new = True

    lam = 10
    while (norm(s) > xtol) and (norm(fk) > ftol) and (k < maxiter):
        # Compute the proposed step.
        B = Ak.T @ Ak + lam * np.eye(n)
        z = Ak.T @ fk
        s = -lstsq(B, z)[0]
        xnew = x[:, k] + s
        fnew = f(xnew)

        # Do we accept the result?
        if norm(fnew) < norm(fk):  # accept
            y = fnew - fk
            x[:, k + 1] = xnew
            fk = fnew
            k = k + 1
            lam = lam / 10  # get closer to Newton
            # Broyden update of the Jacobian.
            Ak = Ak + np.outer(y - Ak @ s, s / np.dot(s, s))
            jac_is_new = False
        else:  # don't accept
            # Get closer to steepest descent.
            lam = lam * 4
            # Re-initialize the Jacobian if it's out of date.
            if not jac_is_new:
                Ak = fdjac(f, x[:, k], fk)
                jac_is_new = True

    if norm(fk) > 1e-3:
        warnings.warn("Iteration did not find a root.")
    return x[:, : k + 1]
```
Examples
from numpy import *
from matplotlib.pyplot import *
from numpy.linalg import solve, norm
import scipy.sparse as sparse
from scipy.sparse.linalg import splu
from timeit import default_timer as timer
from prettytable import PrettyTable
import FNC
Section 4.1
The rootfinding problem for Bessel functions
import scipy.special as special
def J3(x):
    return special.jv(3.0, x)
xx = linspace(0, 20, 500)
fig, ax = subplots()
ax.plot(xx, J3(xx))
ax.grid()
xlabel("$x$"), ylabel("$J_3(x)$")
title("Bessel function")
From the graph we see roots near 6, 10, 13, 16, and 19. We use root_scalar
from the scipy.optimize
package to find these roots accurately.
from scipy.optimize import root_scalar
omega = []
for guess in [6.0, 10.0, 13.0, 16.0, 19.0]:
    s = root_scalar(J3, bracket=[guess - 0.5, guess + 0.5]).root
    omega.append(s)
results = PrettyTable()
results.add_column("root estimate", omega)
results.add_column("function value", [J3(ω) for ω in omega])
print(results)
ax.scatter(omega, J3(omega))
ax.set_title("Bessel function roots")
fig
If instead we seek values at which $J_3(x)=0.2$, then we must find roots of the function $J_3(x)-0.2$.
omega = []
for guess in [3.0, 6.0, 10.0, 13.0]:
    f = lambda x: J3(x) - 0.2
    s = root_scalar(f, x0=guess).root
    omega.append(s)
ax.scatter(omega, J3(omega))
fig
Condition number of a rootfinding problem
Consider first the function
f = lambda x: (x - 1) * (x - 2)
At the root $r=1$, we have $f'(r)=-1$. If the values of $f$ were perturbed at every point by a small amount of noise, we can imagine finding the root of the function as though it were drawn with a thick ribbon, giving a range of potential roots.
xx = linspace(0.8, 1.2, 400)
plot(xx, f(xx))
plot(xx, f(xx) + 0.02, "k")
plot(xx, f(xx) - 0.02, "k")
axis("equal"), grid(True)
xlabel("x"), ylabel("f(x)")
title("Well-conditioned root")
The possible values for a perturbed root all lie within the interval where the ribbon intersects the $x$-axis. The width of that zone is about the same as the vertical thickness of the ribbon.
By contrast, consider the function
f = lambda x: (x - 1) * (x - 1.01)
Now $f'(1)=-0.01$, and the graph of $f$ will be much shallower near $x=1$. Look at the effect this has on our thick rendering:
xx = linspace(0.8, 1.2, 400)
plot(xx, f(xx))
plot(xx, f(xx) + 0.02, "k")
plot(xx, f(xx) - 0.02, "k")
axis("equal"), grid(True)
xlabel("x"), ylabel("f(x)")
title("Poorly-conditioned root")
The vertical displacements in this picture are exactly the same as before. But the potential horizontal displacement of the root is much wider. In fact, if we perturb the function entirely upward by the amount drawn here, the root disappears!
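The sensitivity can be quantified: perturbing $f$ upward by $\epsilon$ moves a simple root by about $\epsilon/|f'(r)|$. Here is a hedged numerical check (not from the book) on the poorly conditioned example:

```python
import numpy as np

f = np.poly1d([1, -2.01, 1.01])       # (x - 1)(x - 1.01)
eps = 1e-8
r = min(f.roots.real)                 # unperturbed root at x = 1
r_pert = min((f + eps).roots.real)    # root after shifting f upward by eps
shift = abs(r_pert - r)
predicted = eps / abs(f.deriv()(r))   # eps / |f'(r)| = 1e-8 / 0.01 = 1e-6
print(shift, predicted)               # the two values agree closely
```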
Section 4.2
Fixed-point iteration
Let’s convert the roots of a quadratic polynomial to a fixed point problem.
f = poly1d([1, -4, 3.5])
r = f.roots
print(r)
We define $g(x) = x - f(x)$.
g = lambda x: x - f(x)
Intersections of $y=g(x)$ with the line $y=x$ are fixed points of $g$ and thus roots of $f$. (Only one is shown in the chosen plot range.)
fig, ax = subplots()
g = lambda x: x - f(x)
xx = linspace(2, 3, 400)
ax.plot(xx, g(xx), label="y=g(x)")
ax.plot(xx, xx, label="y=x")
axis("equal"), legend()
title("Finding a fixed point")
If we evaluate $g(2.1)$, we get a value of almost 2.6, so $x=2.1$ is not a fixed point.
x = 2.1
y = g(x)
print(y)
However, $g(x)$ is considerably closer to the fixed point at around 2.7 than $x$ is. Suppose then that we adopt $g(x)$ as our new value of $x$. Changing the $x$ coordinate in this way is the same as following a horizontal line over to the graph of $y=x$.
ax.plot([x, y], [y, y], "r:", label="")
fig
Now we can compute a new value for $y$. We leave $x$ alone here, so we travel along a vertical line to the graph of $y=g(x)$.
x = y
y = g(x)
print("y:", y)
ax.plot([x, x], [x, y], "k:")
fig
You see that we are in a position to repeat these steps as often as we like. Let’s apply them a few times and see the result.
for k in range(5):
    ax.plot([x, y], [y, y], "r:")
    x = y  # y --> new x
    y = g(x)  # g(x) --> new y
    ax.plot([x, x], [x, y], "k:")
fig
The process spirals in beautifully toward the fixed point we seek. Our last estimate has almost 4 accurate digits.
print(abs(y - max(r)) / max(r))
Now let’s try to find the other fixed point in the same way. We’ll use 1.3 as a starting approximation.
xx = linspace(1, 2, 400)
fig, ax = subplots()
ax.plot(xx, g(xx), label="y=g(x)")
ax.plot(xx, xx, label="y=x")
ax.set_aspect(1.0)
ax.legend()
x = 1.3
y = g(x)
for k in range(5):
    ax.plot([x, y], [y, y], "r:")
    x = y
    y = g(x)
    ax.plot([x, x], [x, y], "k:")
ylim(1, 2.5)
title("No convergence")
This time, the iteration is pushing us away from the correct answer.
Convergence of fixed-point iteration
We revisit Demo 4.2.1 and investigate the observed convergence more closely. Recall that the convergent fixed point is $p = 2+\sqrt{1/2}$, where $g'(p) = 1-f'(p) \approx -0.414$.
f = poly1d([1, -4, 3.5])
r = f.roots
print(r)
Here is the fixed point iteration. This time we keep track of the whole sequence of approximations.
g = lambda x: x - f(x)
x = zeros(12)
x[0] = 2.1
for k in range(11):
    x[k + 1] = g(x[k])
print(x)
It’s illuminating to construct and plot the sequence of errors.
err = abs(x - max(r))
semilogy(err, "-o")
xlabel("iteration number"), ylabel("error")
title("Convergence of fixed point iteration")
It’s quite clear that the convergence quickly settles into a linear rate. We could estimate this rate by doing a least-squares fit to a straight line. Keep in mind that the values for small $k$ should be left out of the computation, as they don’t represent the linear trend.
p = polyfit(arange(5, 13), log(err[4:]), 1)
print(p)
We can exponentiate the slope to get the convergence constant σ.
print("sigma:", exp(p[0]))
The error should therefore decrease by a factor of σ at each iteration. We can check this easily from the observed data.
err[8:] / err[7:-1]
The methods for finding σ agree well.
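The fitted value can also be checked against theory: the linear rate of fixed-point iteration is $\sigma = |g'(p)|$. Since $g(x) = x - f(x)$ here, we have $g'(x) = 1 - f'(x) = 5 - 2x$, and the convergent fixed point is $p = 2 + \sqrt{1/2}$:

```python
# Theoretical convergence rate sigma = |g'(p)| for g(x) = x - f(x),
# with f(x) = x^2 - 4x + 3.5 and fixed point p = 2 + sqrt(1/2).
p = 2 + 0.5**0.5
sigma_theory = abs(5 - 2 * p)
print(sigma_theory)  # ~0.414, which should match the fitted sigma above
```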
Section 4.3
Graphical interpretation of Newton’s method
Suppose we want to find a root of this function:
f = lambda x: x * exp(x) - 2
xx = linspace(0, 1.5, 400)
fig, ax = subplots()
ax.plot(xx, f(xx), label="function")
ax.grid()
ax.set_xlabel("$x$")
ax.set_ylabel("$y$")
From the graph, it is clear that there is a root near $x=1$. So we call that our initial guess, $x_1=1$.
x1 = 1
y1 = f(x1)
ax.plot(x1, y1, "ko", label="initial point")
ax.legend()
fig
Next, we can compute the tangent line at the point $(x_1, y_1)$, using the derivative.
df_dx = lambda x: exp(x) * (x + 1)
slope1 = df_dx(x1)
tangent1 = lambda x: y1 + slope1 * (x - x1)
ax.plot(xx, tangent1(xx), "--", label="tangent line")
ax.set_ylim(-2, 4)
ax.legend()
fig
In lieu of finding the root of $f$ itself, we settle for finding the root of the tangent line approximation, which is trivial. Call this $x_2$, our next approximation to the root.
x2 = x1 - y1 / slope1
ax.plot(x2, 0, "ko", label="tangent root")
ax.legend()
fig
y2 = f(x2)
print(y2)
The residual (i.e., value of $f$) is smaller than before, but not zero. So we repeat the process with a new tangent line based on the latest point on the curve.
xx = linspace(0.83, 0.88, 200)
plot(xx, f(xx))
plot(x2, y2, "ko")
grid(), xlabel("$x$"), ylabel("$y$")
slope2 = df_dx(x2)
tangent2 = lambda x: y2 + slope2 * (x - x2)
plot(xx, tangent2(xx), "--")
x3 = x2 - y2 / slope2
plot(x3, 0, "ko")
title("Second iteration")
y3 = f(x3)
print(y3)
Judging by the residual, we appear to be getting closer to the true root each time.
Convergence of Newton’s method
We again look at finding a solution of $xe^x=2$ near $x=1$. To apply Newton’s method, we need to calculate values of both the residual function and its derivative.
f = lambda x: x * exp(x) - 2
df_dx = lambda x: exp(x) * (x + 1)
We don’t know the exact root, so we use root_scalar to determine a proxy for it.
r = root_scalar(f, bracket=[0.8, 1.0]).root
print(r)
We use $x_1=1$ as a starting guess and apply the iteration in a loop, storing the sequence of iterates in a vector.
x = ones(5)
for k in range(4):
    x[k + 1] = x[k] - f(x[k]) / df_dx(x[k])
print(x)
Here is the sequence of errors.
err = x - r
print(err)
The exponents in the scientific notation definitely suggest a squaring sequence. We can check the evolution of the ratio in (4.3.9).
logerr = log(abs(err))
for i in range(len(err) - 1):
    print(logerr[i + 1] / logerr[i])
The clear convergence to 2 above constitutes good evidence of quadratic convergence.
Using Newton’s method
Suppose we want to evaluate the inverse of the function $h(x)=e^x-x$. This means solving $y=h(x)$, or $e^x-x-y=0$, for $x$ when $y$ is given. That equation has no solution in terms of elementary functions. If a value of $y$ is given numerically, though, we simply have a rootfinding problem for $f(x)=e^x-x-y$.
The enumerate
function produces a pair of values for each iteration: a positional index and the corresponding contents.
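In isolation, with a throwaway list, `enumerate` behaves like this:

```python
letters = ["a", "b", "c"]
pairs = list(enumerate(letters))  # index/value pairs
print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```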
h = lambda x: exp(x) - x
dh_dx = lambda x: exp(x) - 1
y_ = linspace(h(0), h(2), 200)
x_ = zeros(y_.shape)
for i, y in enumerate(y_):
    f = lambda x: h(x) - y
    df_dx = lambda x: dh_dx(x)
    x = FNC.newton(f, df_dx, y)
    x_[i] = x[-1]
plot(x_, y_, label="$y=h(x)$")
plot(y_, x_, label="$y=h^{-1}(x)$")
plot([0, max(y_)], [0, max(y_)], 'k--', label="")
title("Function and its inverse")
xlabel("x"), ylabel("y"), axis("equal")
grid(), legend()
Section 4.4
Graphical interpretation of the secant method
f = lambda x: x * exp(x) - 2
xx = linspace(0.25, 1.25, 400)
fig, ax = subplots()
ax.plot(xx, f(xx), label="function")
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
ax.grid()
From the graph, it’s clear that there is a root near $x=1$. To be more precise, there is a root in the interval $[0.5, 1]$. So let us take the endpoints of that interval as two initial approximations.
x1 = 1
y1 = f(x1)
x2 = 0.5
y2 = f(x2)
ax.plot([x1, x2], [y1, y2], "ko", label="initial points")
ax.legend()
fig
Instead of constructing the tangent line by evaluating the derivative, we can construct a linear model function by drawing the line between the two points $(x_1,y_1)$ and $(x_2,y_2)$. This is called a secant line.
slope2 = (y2 - y1) / (x2 - x1)
secant2 = lambda x: y2 + slope2 * (x - x2)
ax.plot(xx, secant2(xx), "--", label="secant line")
ax.legend()
fig
As before, the next root estimate in the iteration is the root of this linear model.
x3 = x2 - y2 / slope2
ax.plot(x3, 0, "o", label="root of secant")
y3 = f(x3)
print(y3)
ax.legend()
fig
For the next linear model, we use the line through the two most recent points. The next iterate is the root of that secant line, and so on.
slope3 = (y3 - y2) / (x3 - x2)
x4 = x3 - y3 / slope3
print(f(x4))
Convergence of the secant method
We check the convergence of the secant method from Demo 4.4.1.
f = lambda x: x * exp(x) - 2
x = FNC.secant(f, 1, 0.5)
print(x)
We don’t know the exact root, so we use root_scalar
to get a substitute.
from scipy.optimize import root_scalar
r = root_scalar(f, bracket=[0.5, 1]).root
print(r)
Here is the sequence of errors.
err = r - x
print(err)
It’s not easy to see the convergence rate by staring at these numbers. We can use (4.4.8) to try to expose the superlinear convergence rate.
logerr = log(abs(err))
for i in range(len(err) - 2):
    print(logerr[i + 1] / logerr[i])
As expected, this settles in at around 1.618.
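That value is no accident: the superlinear convergence rate of the secant method is the golden ratio, $\varphi = (1+\sqrt{5})/2$.

```python
phi = (1 + 5**0.5) / 2  # golden ratio, the secant method's convergence order
print(phi)  # 1.618033988749895
```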
Inverse quadratic interpolation
Here we look for a root of $x+\cos(10x)$ that is close to 1.
f = lambda x: x + cos(10 * x)
xx = linspace(0.5, 1.5, 400)
fig, ax = subplots()
ax.plot(xx, f(xx), label="function")
ax.grid()
xlabel("$x$"), ylabel("$y$")
fig
We choose three values to get the iteration started.
x = array([0.8, 1.2, 1])
y = f(x)
ax.plot(x, y, "ko", label="initial points")
ax.legend()
fig
If we were using forward interpolation, we would ask for the polynomial interpolant of $y$ as a function of $x$. But that parabola has no real roots.
q = poly1d(polyfit(x, y, 2)) # interpolating polynomial
ax.plot(xx, q(xx), "--", label="interpolant")
ax.set_ylim(-0.1, 3), ax.legend()
fig
To do inverse interpolation, we swap the roles of $x$ and $y$ in the interpolation.
plot(xx, f(xx), label="function")
plot(x, y, "ko", label="initial points")
q = poly1d(polyfit(y, x, 2)) # inverse interpolating polynomial
yy = linspace(-0.1, 2.6, 400)
plot(q(yy), yy, "--", label="inverse interpolant")
grid(), xlabel("$x$"), ylabel("$y$")
legend()
We seek the value of $x$ that makes $y$ zero. This means evaluating $q$ at zero.
x = hstack([x, q(0)])
y = hstack([y, f(x[-1])])
print("x:", x, "\ny:", y)
We repeat the process a few more times.
for k in range(6):
    q = poly1d(polyfit(y[-3:], x[-3:], 2))
    x = hstack([x, q(0)])
    y = hstack([y, f(x[-1])])
print(f"final residual is {y[-1]:.2e}")
Here is the sequence of errors.
from scipy.optimize import root_scalar
r = root_scalar(f, bracket=[0.9, 1]).root
err = x - r
print(err)
The error seems to be superlinear, but subquadratic:
logerr = log(abs(err))
for i in range(len(err) - 1):
    print(logerr[i + 1] / logerr[i])
Section 4.5
Convergence of Newton’s method for systems
A system of nonlinear equations is defined by its residual and Jacobian.
def func(x):
    return array([
        exp(x[1] - x[0]) - 2,
        x[0] * x[1] + x[2],
        x[1] * x[2] + x[0]**2 - x[1],
    ])
def jac(x):
    return array([
        [-exp(x[1] - x[0]), exp(x[1] - x[0]), 0],
        [x[1], x[0], 1],
        [2 * x[0], x[2] - 1, x[1]],
    ])
Our initial guess at a root is the origin.
x1 = zeros(3)
x = FNC.newtonsys(func, jac, x1)
print(x)
The output has one column per iteration, so the last column contains the final Newton estimate. Let’s compute the residual of the last result.
r = x[:, -1]
f = func(r)
print("final residual:", f)
Let’s check the convergence rate:
logerr = [log(norm(x[:, k] - r)) for k in range(x.shape[1] - 1)]
for k in range(len(logerr) - 1):
    print(logerr[k + 1] / logerr[k])
The ratio is apparently converging toward 2, as expected for quadratic convergence.
Section 4.6
Using Levenberg’s method
To solve a nonlinear system, we need to code only the function defining the system, and not its Jacobian.
def func(x):
    return array([
        exp(x[1] - x[0]) - 2,
        x[0] * x[1] + x[2],
        x[1] * x[2] + x[0]**2 - x[1],
    ])
In all other respects usage is the same as for the newtonsys
function.
x1 = zeros(3)
x = FNC.levenberg(func, x1)
print(f"Took {x.shape[1]-1} iterations.")
It’s always a good idea to check the accuracy of the root, by measuring the residual (backward error).
r = x[:, -1]
print("backward error:", norm(func(r)))
Looking at the convergence in norm, we find a convergence rate between linear and quadratic, like with the secant method:
logerr = [log(norm(x[:, k] - r)) for k in range(x.shape[1] - 1)]
for k in range(len(logerr) - 1):
    print(logerr[k + 1] / logerr[k])
Section 4.7
Convergence of nonlinear least squares
We will observe the convergence of Function 4.6.3 for different levels of the minimum least-squares residual. We start with a function mapping from $\mathbb{R}^2$ into $\mathbb{R}^3$, and a point $p$ that will be near the optimum.
g = lambda x: array([sin(x[0] + x[1]), cos(x[0] - x[1]), exp(x[0] - x[1])])
p = array([1, 1])
The function obviously has a zero residual at $p$. We’ll make different perturbations of that function in order to create nonzero residuals.
for R in [1e-3, 1e-2, 1e-1]:
    # Define the perturbed function.
    f = lambda x: g(x) - g(p) + R * array([-1, 1, -1]) / sqrt(3)
    x = FNC.levenberg(f, [0, 0])
    r = x[:, -1]
    err = [norm(x[:, j] - r) for j in range(x.shape[1] - 1)]
    normres = norm(f(r))
    semilogy(err, label=f"R={normres:.2g}")
title("Convergence of Gauss–Newton")
xlabel("iteration"), ylabel("error")
legend();
In the least perturbed case, where the minimized residual is less than $10^{-3}$, the convergence is plausibly quadratic. At the next level up, the convergence starts similarly but suddenly stagnates for a long time. In the most perturbed case, the quadratic phase is nearly gone and the overall shape looks linear.
Nonlinear data fitting
m = 25
V, Km = 2, 0.5
s = linspace(0.05, 6, m)
model = lambda x: V * x / (Km + x)
w = model(s) + 0.15 * cos(2 * exp(s / 16) * s) # noise added
fig, ax = subplots()
ax.scatter(s, w, label="data")
ax.plot(s, model(s), 'k--', label="unperturbed model")
xlabel("s"), ylabel("w")
legend()
The idea is to pretend that we know nothing of the origins of this data and use nonlinear least squares to find the parameters in the theoretical model function $w = \dfrac{Vs}{K_m+s}$. In (4.7.2), the variable $s$ plays the role of $t$, and $w$ plays the role of $y$.
Putting comma-separated names on the left of an assignment destructures the right-hand side, assigning the individual entries of a vector to each name.
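A minimal, self-contained illustration:

```python
c = [2.0, 0.5]  # any sequence works: list, tuple, or NumPy vector
V, Km = c       # destructuring assignment: V gets c[0], Km gets c[1]
print(V, Km)    # 2.0 0.5
```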
def misfit(c):
    V, Km = c  # rename components for clarity
    f = V * s / (Km + s) - w
    return f
In the Jacobian, the derivatives are with respect to the parameters in $x$ (here, $V$ and $K_m$).
def misfitjac(x):
    V, Km = x  # rename components for clarity
    J = zeros([m, 2])
    J[:, 0] = s / (Km + s)  # d/d(V)
    J[:, 1] = -V * s / (Km + s)**2  # d/d(Km)
    return J
x1 = [1, 0.75]
x = FNC.newtonsys(misfit, misfitjac, x1)
V, Km = x[:, -1] # final values
print(f"estimates are V = {V:.3f}, Km = {Km:.3f}")
The final values are reasonably close to the values $V=2$, $K_m=0.5$ that we used to generate the noise-free data. Graphically, the model looks close to the original data.
# since V and Km have been updated, model() is too
ax.plot(s, model(s), label="nonlinear fit")
For this particular model, we also have the option of linearizing the fit process. Rewrite the model as

$$\frac{1}{w} = \frac{K_m + s}{Vs} = \alpha \cdot \frac{1}{s} + \beta$$

for the new fitting parameters $\alpha = K_m/V$ and $\beta = 1/V$. This corresponds to the misfit function whose entries are

$$f_i(\alpha,\beta) = \alpha \cdot \frac{1}{s_i} + \beta - \frac{1}{w_i},$$

for $i=1,\ldots,m$. Although this misfit is nonlinear in $s$ and $w$, it’s linear in the unknown parameters $\alpha$ and $\beta$. This lets us pose and solve it as a linear least-squares problem.
from numpy.linalg import lstsq
A = array( [[1 / s[i], 1.0] for i in range(len(s))] )
z = lstsq(A, 1 / w, rcond=None)[0]
alpha, beta = z
print("alpha:", alpha, "beta:", beta)
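Since $\alpha = K_m/V$ and $\beta = 1/V$, the linearized parameters can be mapped back to estimates of $V$ and $K_m$ for comparison with the nonlinear fit. A hedged sketch with illustrative values (substitute the fitted `alpha` and `beta` from above):

```python
alpha, beta = 0.25, 0.5  # illustrative values only, not the fitted ones
V_lin = 1 / beta         # invert beta = 1/V
Km_lin = alpha / beta    # invert alpha = Km/V
print(V_lin, Km_lin)     # 2.0 0.5
```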
The two fits are different; they do not optimize the same quantities.
linmodel = lambda x: 1 / (beta + alpha / x)
ax.plot(s, linmodel(s), label="linear fit")
ax.legend()
fig
The truly nonlinear fit is clearly better in this case. It optimizes a residual for the original measured quantity rather than a transformed one we picked for algorithmic convenience.