
Interpolation-based methods

From a practical standpoint, one of the biggest drawbacks of Newton’s method is the requirement to supply $f'$ in Function 4.3.2. It is both a programming inconvenience and a step that requires computational time. We can avoid using $f'$, however, by making a simple but easily overlooked observation:

When a quantity is used only to construct an approximation, it can itself be replaced by an approximation without spoiling the result.

Let’s call this the principle of approximate approximation.

In the Newton context, the principle of approximate approximation begins with the observation that the use of $f'$ is linked to the construction of a linear approximation $q(x)$ equivalent to a tangent line. The root of $q(x)$ is used to define the next iterate in the sequence. We can avoid calculating the value of $f'$ by choosing a different linear approximation.

The example in Demo 4.4.1 demonstrates the secant method. In the secant method, one finds the root of the linear approximation through the two most recent root estimates. That is, given previous approximations $x_1,\ldots,x_k$, define the linear model function as the line through $\bigl(x_{k-1},f(x_{k-1})\bigr)$ and $\bigl(x_k,f(x_k)\bigr)$:

$$q(x) = f(x_k) + \frac{f(x_k)-f(x_{k-1})}{x_k-x_{k-1}}\,(x-x_k).$$

Solving $q(x_{k+1})=0$ for $x_{k+1}$ gives the iteration formula

$$x_{k+1} = x_k - f(x_k)\,\frac{x_k-x_{k-1}}{f(x_k)-f(x_{k-1})}, \qquad k=1,2,\ldots. \tag{4.4.2}$$

Our implementation of the secant method is given in Function 4.4.2.
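Function 4.4.2 itself is not reproduced here, so the following is a minimal sketch of such an implementation. The function name secant and the stopping parameters maxiter and xtol are illustrative assumptions, not the book’s actual interface.

```julia
# Minimal secant iteration sketch (not the book's Function 4.4.2).
function secant(f, x₁, x₂; maxiter=40, xtol=1e-13)
    x = [float(x₁), float(x₂)]
    y₁, y₂ = f(x₁), f(x₂)
    for k in 2:maxiter
        # Secant step; breaks down if f(x[k]) == f(x[k-1]) (horizontal line).
        x_new = x[k] - y₂ * (x[k] - x[k-1]) / (y₂ - y₁)
        push!(x, x_new)
        abs(x_new - x[k]) < xtol && break
        y₁, y₂ = y₂, f(x_new)   # only one new evaluation of f per iteration
    end
    return x   # all iterates; x[end] is the root estimate
end
```

Each pass through the loop makes exactly one call to f, a point that matters for the efficiency comparison later in this section.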

4.4.1 Convergence

Graphically, a secant line usually looks like a less accurate model of $f$ than the tangent line. How will that affect the convergence?

As before, let $\epsilon_k = x_k-r$ be the errors in the successive root approximations, and assume that $r$ is a simple root. If the initial errors are small, then a tedious but straightforward Taylor expansion shows that, to lowest order,

$$\epsilon_{k+1} \approx \frac{1}{2}\,\frac{f''(r)}{f'(r)}\, \epsilon_k \epsilon_{k-1}. \tag{4.4.3}$$

If we make an educated guess that

$$\epsilon_{k+1} = c\,(\epsilon_k)^\alpha, \quad \epsilon_k = c\,(\epsilon_{k-1})^\alpha, \quad \ldots, \qquad \alpha>0,$$

then (4.4.3) becomes

$$\bigl[ \epsilon_{k-1}^{\,\alpha} \bigr]^{\alpha} \approx C\, \epsilon_{k-1}^{\,\alpha+1}$$

for an unknown constant $C$. Treating the approximation as an equality, this becomes solvable if and only if the exponents match, i.e., $\alpha^2 = \alpha+1$. The only positive root of this equation is the golden ratio,

$$\alpha = \frac{1+\sqrt{5}}{2} \approx 1.618.$$

Hence the errors in the secant method converge like $\epsilon_{k+1} = c\,(\epsilon_k)^\alpha$ for some $1<\alpha<2$, a situation known as superlinear convergence.

Quadratic convergence is a particular case of superlinear convergence. Roughly speaking, if $|\epsilon_{k+1}| \approx L\,|\epsilon_k|^\alpha$ for some constant $L$, then taking logarithms we expect

$$\begin{aligned} \log |\epsilon_{k+1}| & \approx \alpha \log |\epsilon_k| + \log L, \\ \frac{\log |\epsilon_{k+1}|}{\log |\epsilon_k|} & \approx \alpha + \frac{\log L}{\log |\epsilon_k|} \to \alpha, \end{aligned}$$

as $k\to\infty$.
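As an illustration, this ratio can be monitored from an actual run. Here is a small sketch, reusing the secant function sketched above and assuming the final iterate is accurate enough to serve as the reference root:

```julia
# Estimate the rate α from secant errors on the assumed test problem
# f(x) = x*exp(x) - 2. The ratios should approach (1 + √5)/2 ≈ 1.618.
f = x -> x * exp(x) - 2
x = secant(f, 0.5, 1.0)            # iterates from the sketch above
ϵ = abs.(x[1:end-2] .- x[end])     # errors, using x[end] as reference root
ratios = log.(ϵ[2:end]) ./ log.(ϵ[1:end-1])
```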

In terms of the error as a function of the iteration number $k$, the secant method converges at a rate strictly between linear and quadratic, which is slower than Newton’s method. But error versus iteration count may not be the best means of comparison.

Often we analyze rootfinding methods by assuming that the bulk of computing time is spent evaluating the user-defined functions $f$ and $f'$. (Our simple examples and exercises mostly don’t support this assumption, but many practical applications do.) In this light, we see that Newton’s method requires two evaluations, $f(x_k)$ and $f'(x_k)$, for each iteration. The secant method, on the other hand, while it uses the two function values $f(x_k)$ and $f(x_{k-1})$ at each iteration, only needs to compute a single new one. Note that Function 4.4.2 keeps track of one previous function value rather than recomputing it.

Now suppose that $|\epsilon_k|=\delta$. Roughly speaking, two units of work (i.e., function evaluations) in Newton’s method brings us to an error of $\delta^2$. If one spreads out the improvement in the error evenly across the two steps, using

$$\delta^2 = \bigl( \delta^{\sqrt{2}} \bigr)^{\sqrt{2}},$$

it seems reasonable to say that the rate of convergence in Newton per function evaluation is $\sqrt{2}\approx 1.41$. This is actually less than the comparable rate of about 1.62 for the secant method.

Not only is the secant method easier to apply than Newton’s method in practice, it’s also more efficient—a rare win-win!

4.4.2 Inverse interpolation

At each iteration, the secant method constructs a linear model function that interpolates the two most recently found points on the graph of $f$. Two points determine a straight line, so this seems like a sensible choice. But as the iteration progresses, why use only the two most recent points? What would it mean to use more of them?

If we interpolate through three points by a polynomial, we get a unique quadratic function. Unfortunately, a parabola may have zero, one, or two crossings of the $x$-axis, potentially leaving some doubt as to how to define the next root estimate. On the other hand, if we turn a parabola on its side, we get a graph that intersects the $x$-axis exactly once, which is ideal for defining the next root estimate.

This leads to the idea of defining $q(y)$ as the quadratic interpolant to the points $(y_{k-2},x_{k-2})$, $(y_{k-1},x_{k-1})$, and $(y_k,x_k)$, where $y_i=f(x_i)$ for all $i$, and setting $x_{k+1}=q(0)$. The process defined in this way (given three initial estimates) is called inverse quadratic interpolation. Rather than deriving lengthy formulas for it here, we demonstrate how to perform inverse quadratic interpolation using fit to perform the interpolation step.
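Here is a sketch of a single step, assuming fit from the Polynomials.jl package; the test function and the three current estimates are made up for illustration:

```julia
using Polynomials

# One step of inverse quadratic interpolation on an assumed test function.
f = x -> x * exp(x) - 2
x = [0.5, 0.8, 1.0]      # three current root estimates
y = f.(x)
q = fit(y, x, 2)         # quadratic q(y) interpolating x as a function of y
x_new = q(0)             # next estimate is x_{k+1} = q(0)
```

Repeating this step, each time discarding the oldest of the three points, yields the full iteration; that is essentially what Exercise 9 asks you to implement.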

4.4.3 Bracketing

Like Newton’s method, the secant and inverse quadratic interpolation methods cannot guarantee convergence. One final new idea is needed to make a (nearly) foolproof algorithm.

If $f$ is continuous on the interval $[a,b]$ and $f(a)f(b)<0$, that is, if $f$ changes sign on the interval, then $f$ must have at least one root in the interval, due to the Intermediate Value Theorem from calculus. If we come up with a new root estimate $c\in(a,b)$, then whatever sign $f(c)$ is, it is different from the sign at one of the endpoints. (Of course, if $f(c)$ is zero, we are done!) So either $[a,c]$ or $[c,b]$ is guaranteed to have a root too, and in this way we can maintain not just individual estimates but an interval that always contains a root.
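The bookkeeping for that interval update is short; here is a sketch, with the name update_bracket assumed purely for illustration:

```julia
# Given f(a)f(b) < 0 and a new estimate c ∈ (a, b), return a subinterval
# on which f still changes sign (so it is guaranteed to contain a root).
function update_bracket(f, a, b, c)
    fc = f(c)
    if fc == 0
        return (c, c)          # lucky exact root
    elseif f(a) * fc < 0
        return (a, c)          # sign change on [a, c]
    else
        return (c, b)          # sign change on [c, b]
    end
end
```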

The best algorithms blend the use of fast-converging methods with the guarantee provided by a bracket. For example, say that an iteration begins with a bracketing interval. Make a list of the inverse quadratic estimate, the secant estimate, and the midpoint of the current interval, and pick the first member of the list that lies within the current interval. Replace the interval with the bracketing subinterval, and start a new iteration. This is the idea behind Brent’s method, which is a very successful rootfinding algorithm.
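In practice one usually calls a library implementation rather than coding this from scratch. For example, in Julia the Roots.jl package (an assumption here; it is not one of this chapter’s functions) provides a Brent-style bracketed solver:

```julia
using Roots

# Brent's method on f(x) = x*exp(x) - 2, which changes sign on [0, 1].
f = x -> x * exp(x) - 2
r = find_zero(f, (0, 1), Roots.Brent())
```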

4.4.4 Exercises

For each of Exercises 1–3, do the following steps.

(a) ✍ Rewrite the equation into the standard form for rootfinding, $f(x) = 0$.

(b) ⌨ Make a plot of $f$ over the given interval and determine how many roots lie in the interval.

(c) ⌨ Use nlsolve with ftol=1e-15 to find a reference value for each root.

(d) ⌨ Determine a bracketing interval for each root. Then use Function 4.4.2, starting with the endpoints of the bracketing interval, to find each root.

(e) ⌨ For one of the roots, use the errors in the secant sequence to determine numerically whether the convergence is apparently between linear and quadratic.

  1. $x^2=e^{-x}$, over $[-2,2]$

  2. $2x = \tan x$, over $[-0.2,1.4]$

  3. $e^{x+1}=2+x$, over $[-2,2]$


  4. ⌨ Use a plot to approximately locate all the roots of $f(x)=x^{-2}-\sin(x)$ in the interval $[0.5,10]$. Then find a pair of initial points for each root such that Function 4.4.2 converges to that root.

  5. ✍ Show analytically that the secant method converges in one step for a linear function, regardless of the initialization.

  6. ✍ In general, the secant method formula (4.4.2) cannot be applied if $x_{k}=x_{k-1}$. However, suppose that $f(x)=ax^2+bx+c$ for constants $a$, $b$, and $c$. Show that in this case the formula can be simplified to one that is well defined when $x_{k}=x_{k-1}$. Then show that the resulting $x_{k+1}$ is the same as the result of one step of Newton’s method applied to $f$ at $x_k$.

  7. ✍ Let $f(x)=x^2$. Show that if $1/x_1$ and $1/x_2$ are positive integers, and the secant iteration is applied, then the sequence $1/x_1, 1/x_2, 1/x_3, \ldots$ is a Fibonacci sequence, i.e., it satisfies $\frac{1}{x_{k+1}}=\frac{1}{x_k}+\frac{1}{x_{k-1}}$.

  8. ✍ Provide the details that show how to derive (4.4.3) from (4.4.2).

  9. ⌨ Write a function iqi(f,x₁,x₂,x₃) that performs inverse quadratic interpolation for finding a root of $f$, given three initial estimates. To find the quadratic polynomial $q(y)$ passing through the three most recent points, use fit. Test your function on the function in Exercise 1 from this section.