
Fixed-point iteration

In this section, we consider an alternative form of the rootfinding problem known as the fixed-point problem.

Given $f$ for rootfinding, we could define $g(x)=x-f(x)$, and then $f(r)=0$ implies $g(r)=r$ and vice versa. There are infinitely many ways to make this transformation, such as $g(x)=x+cf(x)$ for any constant $c$. The process can be reversed, too. Given $g(x)$, we could define $f(x)=x-g(x)$, and then $g(p)=p$ implies $f(p)=0$.

There is an extraordinarily simple way to try to find a fixed point of any given $g(x)$: starting from an initial value $x_1$, apply $g$ over and over,

$$x_{k+1} = g(x_k), \qquad k=1,2,\ldots. \tag{4.2.1}$$

This is our first example of an iterative algorithm that never quite gets to the answer, even if we use exact numbers. The idea is to generate a sequence of values that one hopes will converge to the correct result, and stop when we are satisfied that we are close enough to the limit.
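To make the procedure concrete, here is a minimal sketch of (4.2.1) in Python. The function name `fixedpoint`, the stopping tolerance, and the iteration cap are illustrative choices, not prescribed by the text.

```python
# A minimal sketch of fixed-point iteration: apply g repeatedly and stop
# when successive iterates agree to within a tolerance or a cap is reached.
def fixedpoint(g, x1, maxiter=40, tol=1e-12):
    x = x1
    for _ in range(maxiter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # may not have converged; in practice one should check

# Example: g(x) = x - f(x) for f(x) = x^2 - 4x + 3.5, as in Demo 4.2.1.
g = lambda x: x - (x**2 - 4*x + 3.5)
print(fixedpoint(g, 2.1))  # approaches the fixed point near 2.707
```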

4.2.1 Series analysis

In Demo 4.2.1, the two computed iterations differ only in the choice of $x_1$. In the first case we evidently generated a sequence that converged to one of the fixed points. In the second case, however, the generated sequence diverged.[1] The easiest way to uncover the essential difference between the two cases is to use a Taylor series expansion.

Suppose a fixed point $p$ is the desired limit of an iteration $x_1,x_2,\ldots$. It’s often easier to express quantities in terms of the error sequence $\epsilon_1,\epsilon_2,\ldots,$ where $\epsilon_k=x_k-p$. Starting from (4.2.1), we have

$$\epsilon_{k+1}+p = g( \epsilon_{k}+p ) = g(p) + g'(p)\, \epsilon_k + \frac{1}{2}g''(p)\, \epsilon_k^2 + \cdots,$$

assuming that $g$ has at least two continuous derivatives. But by definition, $g(p)=p$, so

$$\epsilon_{k+1} = g'(p)\, \epsilon_k + O(\epsilon_k^2). \tag{4.2.3}$$

If the iteration is to converge to $p$, the errors must approach zero. In this case we can neglect the second-order term and conclude that $\epsilon_{k+1} \approx g'(p)\, \epsilon_k$. This is consistent with convergence if $|g'(p)|<1$. However, if $|g'(p)|>1$, we are led to the conclusion that the errors must grow, not vanish, even if they start quite small.
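As a concrete check (a sketch, using the polynomial from Demo 4.2.1 that reappears in Exercise 4 below), we can evaluate $g'$ at each fixed point of $g(x)=x-f(x)$ and see that the criterion $|g'(p)|<1$ separates the convergent case from the divergent one:

```python
# Evaluate |g'(p)| at both fixed points of g(x) = x - f(x),
# where f(x) = x^2 - 4x + 3.5; the fixed points of g are the roots of f.
import math

roots = [2 - math.sqrt(0.5), 2 + math.sqrt(0.5)]  # quadratic formula

for p in roots:
    gp = 1 - (2*p - 4)  # g'(x) = 1 - f'(x) = 5 - 2x
    verdict = "should converge" if abs(gp) < 1 else "must diverge"
    print(f"p = {p:.4f}:  |g'(p)| = {abs(gp):.4f}  ->  {verdict}")
```

Only the fixed point near 2.707 has $|g'(p)|<1$, matching the convergent and divergent behaviors observed in Demo 4.2.1.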

4.2.2 Linear convergence

In computation we usually want to know not just whether an iteration converges but also the rate at which convergence occurs, i.e., how quickly the errors approach zero. Other things being equal, faster convergence is preferred to slower convergence, as it usually implies that the computation will take less time to achieve a desired accuracy.

The prediction of the series analysis above is that if the fixed point iteration converges, the errors approximately satisfy $|\epsilon_{k+1}| = \sigma|\epsilon_k|$, for $\sigma = |g'(p)| < 1$. This is a well-known type of convergence, called linear convergence, in which the error ratios approach a constant:

$$\lim_{k\to\infty} \frac{|\epsilon_{k+1}|}{|\epsilon_k|} = \sigma < 1. \tag{4.2.4}$$

If we suppose that the ratios in (4.2.4) all equal $\sigma$ (i.e., perfect linear convergence), then $|\epsilon_k| = C \sigma^k$. Taking logs, we get

$$\log |\epsilon_k| = k(\log \sigma) + (\log C).$$

This is in the form $\log |\epsilon_k| = \alpha k + \beta$, which is a linear relationship: on a log-linear plot, the errors lie along a straight line whose slope is $\log \sigma$.
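A short experiment illustrates this. In the sketch below, the choice $g(x)=\pi+\frac{1}{4}\sin x$ is borrowed from Exercise 1(b), for which $p=\pi$ and $\sigma=|g'(\pi)|=1/4$; the starting guess is arbitrary.

```python
# Track the errors of fixed-point iteration for g(x) = pi + sin(x)/4,
# whose fixed point is p = pi with sigma = |g'(pi)| = 1/4.
import math

g = lambda x: math.pi + math.sin(x) / 4
p, x = math.pi, 2.0  # exact fixed point and a starting guess

errs = [abs(x - p)]
for k in range(10):
    x = g(x)
    errs.append(abs(x - p))

# The ratios |eps_{k+1}| / |eps_k| settle near sigma = 0.25, so the points
# (k, log|eps_k|) fall on a straight line with slope log(0.25).
for e0, e1 in zip(errs, errs[1:]):
    print(f"ratio = {e1/e0:.4f}")
```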

4.2.3 Contraction maps

The convergence condition $\sigma=|g'(p)|<1$ derived by series expansion is a special case of a more general condition. Suppose that $g$ satisfies the Lipschitz condition

$$|g(s)-g(t)| \le L\, |s-t| \quad \text{for all } s,t \in S, \tag{4.2.6}$$

for some constant $L$ and interval $S$. It can be shown that a function satisfying (4.2.6) is continuous in $S$. If $L<1$, we call $g$ a contraction mapping, because distances between points all decrease after an application of $g$. This situation leads to a major result about fixed points.

Theorem 4.2.1 (Contraction mapping). Suppose that $g$ satisfies (4.2.6) with $L<1$ on a closed interval $S$, and that $g(x)\in S$ whenever $x\in S$. Then $S$ contains exactly one fixed point $p$ of $g$, and the iteration (4.2.1) converges to $p$ whenever $x_1\in S$.

From the Fundamental Theorem of Calculus, which asserts that $g(s)-g(t)=\int_t^s g'(x)\, dx$, it’s easy to conclude that an upper bound $|g'(x)|\le L$ for all $x\in S$ results in (4.2.6). Hence a derivative bound $|g'(x)| \le L < 1$ on an interval that $g$ maps into itself is enough to guarantee the conclusions of Theorem 4.2.1.
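As a quick worked illustration, take $g(x)=\pi+\tfrac{1}{4}\sin x$ from Exercise 1 below. Since $|g'(x)|=\tfrac{1}{4}|\cos x|\le\tfrac{1}{4}$ for every real $x$, the argument above shows that $g$ satisfies (4.2.6) with $L=\tfrac{1}{4}$ on any interval. Moreover, $g$ maps $S=[\pi-\tfrac{1}{4},\,\pi+\tfrac{1}{4}]$ into itself, so Theorem 4.2.1 guarantees a unique fixed point in $S$ (namely $p=\pi$) and convergence of the iteration from any $x_1\in S$.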

There are stronger and more general statements of Theorem 4.2.1. For instance, it’s possible to show that all initial $x_1$ that are sufficiently close to the fixed point will lead to convergence of the iteration. Algorithmically, the main virtue of the fixed point iteration is that it is incredibly easy to apply. However, as we are about to discover, it’s far from the fastest option.

4.2.4 Exercises

  1. ✍ In each case, show that the given $g(x)$ has a fixed point at the given $p$ and use (4.2.3) to show that fixed point iteration can converge to it.

    (a) $g(x) = 1 + x - \frac{1}{9}x^2$, $p=3$

    (b) $g(x) = \pi + \frac{1}{4}\sin(x)$, $p=\pi$

    (c) $g(x) = x+1-\tan(x/4)$, $p=\pi$

  2. ⌨ For each case in the preceding problem, apply 25 fixed point iterations and use a log-linear graph of the error to verify linear convergence. Then use numerical values of the error to determine an approximate value for $\sigma$ in (4.2.4).

  3. ✍ In each case, show that the given $g(x)$ has a fixed point at the given $p$. Then determine analytically whether the fixed point iteration could converge to that point given a close enough starting value.

    (a) $g(x) = 3+x-x^2$, $p=\sqrt{3}$

    (b) $g(x) = \sqrt{1+x}$, $p=(1+\sqrt{5})/2$

    (c) $g(x) = -\sqrt{1+x}$, $p=(1-\sqrt{5})/2$

    (d) $g(x) = x+1-\tan(\pi x)$, $p=1/4$

  4. In Demo 4.2.1 we defined $g(x)=x-f(x)$, so that a fixed point of $g$ is a root of the polynomial $f(x)=x^2 - 4x + 3.5$.

    (a) ✍ Why does the iteration spiral in to the fixed point, instead of approaching it monotonically? (Refer to the series analysis.)

    (b) ✍ Show that if $\hat{g}(x) = (x^2+3.5)/4$, then any fixed point of $\hat{g}$ is a root of $f$.

    (c) ⌨ Use fixed point iteration on $\hat{g}$ to try to find both roots of $f$, and note which case(s), if either, converge.

    (d) ✍ Use (4.2.3) to explain the success/failure in part (c) for each fixed point.

  5. ✍ The $m$th root of a positive real number $a$ is a fixed point of the function

    $$g(x) = \frac{a}{x^{m-1}}.$$

    For what integer values of $m>1$ will the fixed point iteration for $g$ converge (for close enough initial guesses)?

  6. (a) ✍ Show that $p=1/3$ is a fixed point of $g(x) = 2x-3x^2$.

    (b) ✍ Find $g'(1/3)$. How does this affect (4.2.3)?

    (c) ⌨ Do an experiment with fixed point iteration on $g$ to converge to $p=1/3$. Is the convergence a straight line on a log-linear plot?

  7. ✍ Consider the iteration

    $$x_{k+1} = x_k - \frac{f(x_k)}{c}, \qquad k=0,1,\ldots.$$

    Suppose $f(p)=0$ and that the derivative $f'(p)$ exists with $f'(p)>0$. Find one or more conditions on $c$ such that the iteration converges to $p$.

Footnotes
  1. We can only ever generate a finite sample from an infinite sequence, which in principle does not guarantee anything whatsoever about the limit or divergence of that sequence. However, in practical computing one usually assumes that well-established trends in the sequence will continue, and we complement observed experience with rigorous theory where possible.