
Chapter 1

Functions

Horner’s algorithm for evaluating a polynomial
horner.py
def horner(c,x):
    """
    horner(c,x)

    Evaluate a polynomial whose coefficients are given in descending order 
    in c, at the point x, using Horner's rule.
    """
    n = len(c)
    y = c[0]
    for k in range(1, n):
        y = x * y + c[k]   
    return y
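As a quick check of the function (this snippet is not part of the original listing), we can evaluate $p(x) = x^2 - 3x + 2$ at $x = 4$, whose exact value is 6:

print(horner([1, -3, 2], 4))   # expect 6: 4**2 - 3*4 + 2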

Examples

1.1 Floating-point numbers
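The code in the examples below uses NumPy names such as pi, sqrt, and finfo without a package prefix. The original setup cell is not shown in this excerpt; a minimal set of imports, assuming only NumPy, that makes the snippets runnable is:

from numpy import (pi, sqrt, log10, log2, array, sort,
                   finfo, float64, frexp, poly, roots)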

Example 1.1.2

Recall the grade-school approximation to the number π.

p = 22/7
print(p)
3.142857142857143

Not all the digits displayed for p are the same as those of π.

print(pi)
3.141592653589793

The absolute and relative accuracies of the approximation are as follows:

print(f"absolute accuracy: {abs(p - pi)}")
absolute accuracy: 0.0012644892673496777
rel_acc = abs(p - pi) / pi
print("relative accuracy: {rel_acc:.4e}")
relative accuracy: {rel_acc:.4e}

Here we calculate the number of accurate digits in p:

print(f"accurate digits: {-log10(rel_acc):.1f}")
accurate digits: 3.4
Example 1.1.3

Python has native int and float types.

print(f"The type of {1} is {type(1)}")
print(f"The type of {float(1)} is {type(1.0)}")
The type of 1 is <class 'int'>
The type of 1.0 is <class 'float'>

The numpy package has its own float types:

one = float64(1)
print(f"The type of {one} is {type(one)}")
The type of 1.0 is <class 'numpy.float64'>

Both float and float64 are double precision, using 64 binary bits per value. Although it is not normally necessary to do so, we can deconstruct a float into its significand and exponent. (Note that frexp returns a significand in $[1/2, 1)$, so the code below doubles it and decreases the exponent by one to obtain the usual significand in $[1, 2)$.)

x = 3.14
mantissa, exponent = frexp(x)
print(f"significand: {mantissa * 2}, exponent: {exponent - 1}")
significand: 1.57, exponent: 1
mantissa, exponent = frexp(x / 8)
print(f"significand: {mantissa * 2}, exponent: {exponent - 1}")
significand: 1.57, exponent: -2

The spacing between floating-point values in $[2^n, 2^{n+1})$ is $2^n \epsilon_\text{mach}$, where $\epsilon_\text{mach}$ is machine epsilon, given here for double precision:

mach_eps = finfo(float).eps
print(f"machine epsilon is {mach_eps:.4e}")
machine epsilon is 2.2204e-16

Because double precision allocates 52 bits to the significand, the default value of machine epsilon is $2^{-52}$.

print(f"machine epsilon is 2 to the power {log2(mach_eps)}")
machine epsilon is 2 to the power -52.0
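To see the spacing claim in action (an added check, not in the original), NumPy's nextafter gives the next representable number in a chosen direction, so the gap just above $2^3$ should be $2^3 \epsilon_\text{mach}$:

from numpy import nextafter
gap = nextafter(8.0, 16.0) - 8.0      # distance from 8 to the next float above it
print(gap == 8 * mach_eps)            # expect True; both equal 2**(-49)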

A common mistake is to think that $\epsilon_\text{mach}$ is the smallest floating-point number. It’s only the smallest relative to 1. The correct perspective is that the scaling of values is limited by the exponent, not the significand. The actual range of positive values in double precision is

finf = finfo(float)
print(f"range of positive values: [{finf.tiny}, {finf.max}]")
range of positive values: [2.2250738585072014e-308, 1.7976931348623157e+308]
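Going outside this range does not raise an error in floating point (a small demonstration added here, not in the original); results overflow to inf or underflow to zero:

print(float(finf.max) * 2)    # overflow: prints inf
print(2.0 ** -1080)           # below the smallest subnormal: prints 0.0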

For the most part you can mix integers and floating-point values and get what you expect.

1/7
0.14285714285714285
37.3 + 1
38.3
2**(-4)
0.0625

You can convert a floating value to an integer by wrapping it in int.

int(3.14)
3
Example 1.1.4

There is no double precision number between 1 and $1+\varepsilon_\text{mach}$. Thus, the following difference is zero despite its appearance.

eps = finfo(float).eps
e = eps/2
print((1.0 + e) - 1.0)
0.0

However, $1-\varepsilon_\text{mach}/2$ is a double precision number, so it and its negative are represented exactly:

print(1.0 + (e - 1.0))
1.1102230246251565e-16

This is now the “correct” result. But we have found a rather shocking breakdown of the associative law of addition!

1.2 Problems and conditioning

Example 1.2.5

The polynomial $p(x) = \frac{1}{3}(x-1)(x-1-\epsilon)$ has roots $1$ and $1+\epsilon$. For small values of $\epsilon$, the roots are ill-conditioned.

ep = 1e-6   
a, b, c = 1/3, (-2 - ep) / 3, (1 + ep) / 3   # coefficients of p

Here are the roots as computed by the quadratic formula.

d = sqrt(b**2 - 4*a*c)
r1 = (-b - d) / (2*a)
r2 = (-b + d) / (2*a)
print(r1, r2)
0.9999999998251499 1.0000010001748503

The display of r2 suggests that the last five digits or so are inaccurate. The relative error in the value is

print(abs(r1 - 1) / abs(1))
print(abs(r2 - (1 + ep)) / abs(1 + ep))
1.7485013437124053e-10
1.748501815656639e-10

The condition number of each root is

$$
\kappa(r_i) = \frac{|r_i|}{|r_1-r_2|} \approx \frac{1}{\epsilon}.
$$

Thus, relative error in the data at the level of roundoff can grow in the result to be roughly

print(finfo(float).eps / ep)
2.220446049250313e-10

This matches the observation pretty well.
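As a further check (added here, not in the original), we can evaluate the condition number formula directly from the computed roots and multiply by machine epsilon to predict the error level:

kappa = abs(r1) / abs(r1 - r2)    # condition number of the root near 1; roughly 1/ep
print(f"predicted relative error: {kappa * finfo(float).eps:.3e}")   # roughly 2.2e-10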

1.3 Algorithms

Example 1.3.2

Here we show how to use horner to evaluate a polynomial. First, we have to ensure that the book’s package is imported.

import fncbook as FNC

Here is the help string for the function:

help(FNC.horner)
Help on function horner in module fncbook.chapter01:

horner(c, x)
    horner(c,x)

    Evaluate a polynomial whose coefficients are given in descending order
    in c, at the point x, using Horner's rule.

We now define a vector of the coefficients of $p(x) = (x-1)^3 = x^3 - 3x^2 + 3x - 1$, in descending degree order. Note that the textbook’s functions are all in a namespace called FNC, to help distinguish them from other Python commands and modules.

c = array([1, -3, 3, -1])
print(FNC.horner(c, 1.6))
0.21600000000000041

The above is the value of $p(1.6)$, up to a rounding error.
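For comparison (an addition, not part of the book's example), NumPy's polyval evaluates a polynomial from the same descending-order coefficients and should agree with horner up to roundoff:

from numpy import polyval
print(polyval(c, 1.6))    # should match FNC.horner(c, 1.6) to within rounding error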

1.4 Stability

Example 1.4.1

We apply the quadratic formula to find the roots of a quadratic via (1.4.1).

a = 1;  b = -(1e6 + 1e-6);  c = 1;
x1 = (-b + sqrt(b**2 - 4*a*c)) / (2*a)
x2 = (-b - sqrt(b**2 - 4*a*c)) / (2*a)
print(x1, x2)
1000000.0 1.00000761449337e-06

The first value is correct to all stored digits, but the second has fewer than six accurate digits:

error = abs(1e-6 - x2) / 1e-6 
print(f"There are {-log10(error):.2f} accurate digits.")
There are 5.12 accurate digits.

The instability is easily explained. Since $a=c=1$, we treat them as exact numbers. First, we compute the condition numbers with respect to $b$ for each elementary step in finding the “good” root:

| Calculation | Result | $\kappa$ |
| --- | --- | --- |
| $u_1 = b^2$ | $1.000000000002000\times 10^{12}$ | 2 |
| $u_2 = u_1 - 4$ | $9.999999999980000\times 10^{11}$ | $\approx 1.00$ |
| $u_3 = \sqrt{u_2}$ | $999999.9999990000$ | 1/2 |
| $u_4 = u_3 - b$ | $2000000$ | $\approx 0.500$ |
| $u_5 = u_4/2$ | $1000000$ | 1 |

Using (1.2.9), the chain rule for condition numbers, the condition number of the entire chain is the product of those of the individual steps, so there is essentially no growth of relative error here. However, if we use the quadratic formula for the “bad” root, the next-to-last step becomes $u_4 = (-u_3) - b$, and now $\kappa = |u_3|/|u_4| \approx 5\times 10^{11}$. So we can expect to lose about 11 digits of accuracy, which is what we observed. The key issue is the subtractive cancellation in this one step.
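To make that cancellation visible (a sketch added here, not from the original), we can carry out the elementary steps for the “bad” root explicitly; nearly all of the significant digits are lost in the step that forms $u_4$:

b = -(1e6 + 1e-6)
u1 = b**2
u2 = u1 - 4
u3 = sqrt(u2)
u4 = (-u3) - b     # subtraction of two nearly equal numbers: severe cancellation
u5 = u4 / 2
print(u5)          # computed "bad" root; the exact root is 1e-6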

Example 1.4.2

We repeat the rootfinding experiment of Demo 1.4.1 with an alternative algorithm.

a = 1;  b = -(1e6 + 1e-6);  c = 1;

First, we find the “good” root using the quadratic formula.

x1 = (-b + sqrt(b**2 - 4*a*c)) / (2*a)

Then we use the identity $x_1 x_2 = \frac{c}{a}$ to compute the smaller root:

x2 = c / (a * x1)
print(x1, x2)
1000000.0 1e-06

To be sure we have an accurate result, we compute its relative error.

print(abs(x2 - 1e-6) / 1e-6)
0.0
Example 1.4.3

Our first step is to construct a polynomial with six known roots.

r = [-2, -1, 1, 1, 3, 6]
p = poly(r)
print(p)
[  1.  -8.   6.  44. -43. -36.  36.]

Now we use a standard numerical method for finding those roots, pretending that we don’t know them already. This corresponds to $\tilde{y}$ in Definition 1.4.1.

r_computed = sort(roots(p))
print(r_computed)
[-2.+0.00000000e+00j -1.+0.00000000e+00j  1.-1.31638218e-08j
  1.+1.31638218e-08j  3.+0.00000000e+00j  6.+0.00000000e+00j]

Here are the relative errors in each of the computed roots. (The entries are signed only because we divide by the signed exact roots; the magnitudes are what matter.)

print(abs(r - r_computed) / r)
[-5.55111512e-16 -2.22044605e-16  1.31638218e-08  1.31638218e-08
  4.44089210e-16  1.33226763e-15]

It seems that the forward error is acceptably close to machine epsilon for double precision in all cases except the double root at $x=1$. This is not a surprise, though, given the poor conditioning at such roots.

Let’s consider the backward error. The data in the rootfinding problem is the polynomial coefficients. We can apply poly to find the coefficients of the polynomial (that is, the data) whose roots were actually computed by the numerical algorithm. This corresponds to $\tilde{x}$ in Definition 1.4.1.

p_computed = poly(r_computed)
print(p_computed)
[  1.  -8.   6.  44. -43. -36.  36.]

We find that in a relative sense, these coefficients are very close to those of the original, exact polynomial:

print(abs(p - p_computed) / p)
[ 0.00000000e+00 -1.11022302e-15  2.96059473e-15  1.45338287e-15
 -3.63533493e-15 -1.38161088e-15  3.75008666e-15]

In summary, even though there are some computed roots relatively far from their correct values, they are nevertheless the roots of a polynomial that is very close to the original.
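One compact way to summarize (added for emphasis, not in the original) is to compare the largest relative forward error in the roots with the largest relative backward error in the coefficients:

print(max(abs(r - r_computed) / abs(array(r))))    # forward error, on the order of 1e-8
print(max(abs(p - p_computed) / abs(p)))           # backward error, a few times machine epsilon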