
Chapter 1

Functions

Horner’s algorithm for evaluating a polynomial
horner.jl
"""
    horner(c, x)

Evaluate a polynomial whose coefficients are given in ascending
order in `c`, at the point `x`, using Horner's rule.
"""
function horner(c, x)
    n = length(c)
    y = c[n]
    for k in n-1:-1:1
        y = x * y + c[k]
    end
    return y
end
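As a quick sanity check, the function can be exercised on a small polynomial whose value is easy to verify by hand; the coefficients and evaluation point below are illustrative, not from the text.

```julia
# Horner's rule as above: coefficients in ascending degree order.
function horner(c, x)
    y = c[end]
    for k in length(c)-1:-1:1
        y = x * y + c[k]
    end
    return y
end

# p(x) = 1 + 2x + 3x², so p(2) = 1 + 4 + 12 = 17.
horner([1, 2, 3], 2)
```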

Examples

1.1 Floating-point numbers

Example 1.1.2

Recall the grade-school approximation to the number π.

@show p = 22/7;
p = 22 / 7 = 3.142857142857143

Not all the digits displayed for p are the same as those of π.

@show float(π);
float(π) = 3.141592653589793

The absolute and relative accuracies of the approximation are as follows.

acc = abs(p-π)
println("absolute accuracy = $acc")
println("relative accuracy = $(acc/π)")
absolute accuracy = 0.0012644892673496777
relative accuracy = 0.0004024994347707008

Here we calculate the number of accurate digits in p.

println("Number of accurate digits = $(-log10(acc/π))")
Number of accurate digits = 3.3952347251747166

This last value could be rounded down by using floor.
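For instance, using the relative accuracy computed above (a minimal illustration):

```julia
# floor rounds down to the nearest whole number; Int converts the type.
digits = -log10(0.0004024994347707008)   # value from the output above
Int(floor(digits))
```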

Example 1.1.3

In Julia, 1 and 1.0 are different values, because they have different types:

@show typeof(1);
@show typeof(1.0);
typeof(1) = Int64

typeof(1.0) = Float64

The standard choice for floating-point values is Float64, which is double precision using 64 binary bits. We can see all the bits by using bitstring.

bitstring(1.0)
"0011111111110000000000000000000000000000000000000000000000000000"

The first bit determines the sign of the number:

[bitstring(1.0), bitstring(-1.0)]
2-element Vector{String}: "0011111111110000000000000000000000000000000000000000000000000000" "1011111111110000000000000000000000000000000000000000000000000000"

The next 11 bits determine the exponent (scaling) of the number, and so on.

[bitstring(1.0), bitstring(2.0)]
2-element Vector{String}: "0011111111110000000000000000000000000000000000000000000000000000" "0100000000000000000000000000000000000000000000000000000000000000"

The sign bit, exponent, and significand in (1.1.1) are all directly accessible.

x = 3.14
@show sign(x), exponent(x), significand(x);
(sign(x), exponent(x), significand(x)) = (1.0, 1, 1.57)

x = x / 8
@show sign(x), exponent(x), significand(x);
(sign(x), exponent(x), significand(x)) = (1.0, -2, 1.57)

The spacing between floating-point values in $[2^n, 2^{n+1})$ is $2^n \epsilon_\text{mach}$, where $\epsilon_\text{mach}$ is machine epsilon. You can get its value from the eps function in Julia. By default, it returns the value for double precision.

eps()
2.220446049250313e-16

Because double precision allocates 52 bits to the significand, the default value of machine epsilon is $2^{-52}$.

log2(eps())
-52.0

The spacing between adjacent floating-point values is proportional to the magnitude of the value itself. This is how relative precision is kept roughly constant throughout the range of values. You can get the adjusted spacing by calling eps with a value.

eps(1.618)
2.220446049250313e-16
eps(161.8)
2.842170943040401e-14
nextfloat(161.8)
161.80000000000004
ans - 161.8
2.842170943040401e-14
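The proportionality can be confirmed directly: eps at a value equals machine epsilon scaled by two raised to that value's binary exponent. This check is an illustrative extension, not from the text.

```julia
# Spacing at x is eps() scaled by 2 to x's binary exponent.
# For x = 161.8, exponent(x) = 7, so both sides equal 2^(7-52).
x = 161.8
eps(x) == 2.0^exponent(x) * eps()
```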

A common mistake is to think that $\epsilon_\text{mach}$ is the smallest floating-point number. It’s only the smallest relative to 1. The correct perspective is that the scaling of values is limited by the exponent, not the mantissa. The actual range of positive values in double precision is

@show floatmin(), floatmax();
(floatmin(), floatmax()) = (2.2250738585072014e-308, 1.7976931348623157e308)

For the most part you can mix integers and floating-point values and get what you expect.

1/7
0.14285714285714285
37.3 + 1
38.3
2^(-4)
0.0625

There are some exceptions. A floating-point value can’t be used as an index into an array, for example, even if it is numerically equal to an integer. In such cases you use Int to convert it.

@show 5.0, Int(5.0);
(5.0, Int(5.0)) = (5.0, 5)

If you try to convert a noninteger floating-point value into an integer, you get an InexactError. This occurs whenever you try to force a type conversion that doesn’t make clear sense.
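A brief illustration (the values here are illustrative): Int fails on a noninteger value, while round with a target type rounds to the nearest integer first and then converts.

```julia
# Converting a noninteger float to Int throws an InexactError.
try
    Int(3.7)
catch err
    println(typeof(err))   # prints InexactError
end

# round with a type argument rounds first, then converts safely.
round(Int, 3.7)
```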

Example 1.1.4

There is no double-precision number between 1 and $1+\epsilon_\text{mach}$. Thus the following difference is zero despite its appearance.

e = eps()/2
(1.0 + e) - 1.0
0.0

However, the spacing between floats in $[1/2, 1)$ is $\epsilon_\text{mach}/2$, so both $1-\epsilon_\text{mach}/2$ and its negative are represented exactly:

1.0 + (e - 1.0)
1.1102230246251565e-16

This is now the expected result. But we have found a rather shocking breakdown of the associative law of addition!

1.2 Problems and conditioning

Example 1.2.5

The polynomial $p(x) = \frac{1}{3}(x-1)(x-1-\epsilon)$ has roots 1 and $1+\epsilon$. For small values of $\epsilon$, the roots are ill-conditioned.

ϵ = 1e-6   # type \epsilon and then press Tab
a,b,c = 1/3,(-2-ϵ)/3,(1+ϵ)/3   # coefficients of p
(0.3333333333333333, -0.666667, 0.33333366666666664)

Here are the roots as computed by the quadratic formula.

d = sqrt(b^2 - 4a*c)
r₁ = (-b - d) / (2a)   # type r\_1 and then press Tab
r₂ = (-b + d) / (2a)
(r₁, r₂)
(0.9999999998251499, 1.0000010001748503)

The relative errors in these values are

@show abs(r₁ - 1) / abs(1);
@show abs(r₂ - (1+ϵ)) / abs(1+ϵ);
abs(r₁ - 1) / abs(1) = 1.7485013437124053e-10
abs(r₂ - (1 + ϵ)) / abs(1 + ϵ) = 1.748501815656639e-10

The condition number of each root is

$$\kappa(r_i) = \frac{|r_i|}{|r_1-r_2|} \approx \frac{1}{\epsilon}.$$

Thus, relative error in the data at the level of roundoff can grow in the result to be roughly

eps() / ϵ
2.220446049250313e-10

This matches the observation pretty well.
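To see the trend, one can repeat the computation for several values of $\epsilon$; the loop below is an illustrative extension of the example above, not part of the text.

```julia
# Relative error in the computed root r₁ grows like eps()/ϵ as ϵ shrinks.
for ϵ in [1e-4, 1e-6, 1e-8]
    a, b, c = 1/3, (-2 - ϵ)/3, (1 + ϵ)/3
    r₁ = (-b - sqrt(b^2 - 4a*c)) / 2a
    println("ϵ = $ϵ:  error = $(abs(r₁ - 1)),  eps()/ϵ = $(eps()/ϵ)")
end
```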

1.3 Algorithms

Example 1.3.2

Here we show how to use Function 1.3.1 to evaluate a polynomial. It’s not a part of core Julia, so you need to download and install this text’s package once, and load it for each new Julia session. The download is done by the following lines.

#import Pkg
#Pkg.add("FNCBook");

Once installed, any package can be loaded with the using command, as follows.

#using FundamentalsNumericalComputation
using FNCFunctions
FNC = FNCFunctions

For convenience, this package also imports many other packages used throughout the book and makes them available as though you had run a using command for each of them.

Returning to horner, let us define a vector of the coefficients of $p(x)=(x-1)^3=x^3-3x^2+3x-1$, in ascending degree order.

c = [-1, 3, -3, 1]
4-element Vector{Int64}: -1 3 -3 1

In order to avoid clashes between similarly named functions, Julia has boxed all the book functions into a namespace called FNC. We use this namespace whenever we invoke one of the functions.

FNC.horner(c, 1.6)
0.21600000000000041

The above is the value of $p(1.6)$.

While the namespace does lead to a little extra typing, a nice side effect of using this paradigm is that if you type FNC. (including the period) and hit the Tab key, you will see a list of all the functions known in that namespace.

The multi-line string at the start of Function 1.3.1 is documentation, which we can access using ?FNC.horner.

1.4 Stability

Example 1.4.1

We apply the quadratic formula to find the roots of a quadratic via (1.4.1).

a = 1;  b = -(1e6 + 1e-6);  c = 1;
@show x₁ = (-b + sqrt(b^2 - 4a*c)) / 2a;
@show x₂ = (-b - sqrt(b^2 - 4a*c)) / 2a;
x₁ = (-b + sqrt(b ^ 2 - (4a) * c)) / (2a) = 1.0e6
x₂ = (-b - sqrt(b ^ 2 - (4a) * c)) / (2a) = 1.00000761449337e-6

The first value is correct to all stored digits, but the second has fewer than six accurate digits:

error = abs(1e-6 - x₂) / 1e-6 
@show accurate_digits = -log10(error);
accurate_digits = -(log10(error)) = 5.118358987126217

The instability is easily explained. Since a=c=1a=c=1, we treat them as exact numbers. First, we compute the condition numbers with respect to bb for each elementary step in finding the “good” root:

| Calculation | Result | $\kappa$ |
| --- | --- | --- |
| $u_1 = b^2$ | $1.000000000002000\times 10^{12}$ | $2$ |
| $u_2 = u_1 - 4$ | $9.999999999980000\times 10^{11}$ | $\approx 1.00$ |
| $u_3 = \sqrt{u_2}$ | $999999.9999990000$ | $1/2$ |
| $u_4 = u_3 - b$ | $2000000$ | $\approx 0.500$ |
| $u_5 = u_4/2$ | $1000000$ | $1$ |

Using (1.2.9), the chain rule for condition numbers, the conditioning of the entire chain is the product of the individual steps, so there is essentially no growth of relative error here. However, if we use the quadratic formula for the “bad” root, the next-to-last step becomes $u_4=(-u_3) - b$, and now $\kappa=|u_3|/|u_4|\approx 5\times 10^{11}$. So we can expect to lose 11 digits of accuracy, which is what we observed. The key issue is the subtractive cancellation in this one step.

Example 1.4.2

We repeat the rootfinding experiment of Demo 1.4.1 with an alternative algorithm.

a = 1;  b = -(1e6 + 1e-6);  c = 1;

First, we find the “good” root using the quadratic formula.

@show x₁ = (-b + sqrt(b^2 - 4a*c)) / 2a;
x₁ = (-b + sqrt(b ^ 2 - (4a) * c)) / (2a) = 1.0e6

Then we use the identity $x_1 x_2=\frac{c}{a}$ to compute the smaller root:

@show x₂ = c / (a * x₁);
x₂ = c / (a * x₁) = 1.0e-6

As you see in this output, Julia often suppresses trailing zeros in a decimal expansion. To be sure we have an accurate result, we compute its relative error.

abs(x₂ - 1e-6) / 1e-6
0.0
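The two steps above can be combined into a single stable routine; the sketch below is illustrative, and the name quadroots is not from the text. It computes the larger-magnitude root by the quadratic formula in the form that avoids cancellation, then recovers the other root from the product identity.

```julia
# Stable quadratic solver (illustrative sketch, not from the text).
# Assumes real roots and b ≠ 0.
function quadroots(a, b, c)
    d = sqrt(b^2 - 4a*c)
    x₁ = (-b - sign(b) * d) / 2a   # -b and -sign(b)d share a sign: no cancellation
    x₂ = c / (a * x₁)              # identity x₁x₂ = c/a recovers the small root
    return x₁, x₂
end

quadroots(1, -(1e6 + 1e-6), 1)   # returns roots near 1e6 and 1e-6
```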
Example 1.4.3

For this example we will use the Polynomials package, which is installed by the FNC package.

Our first step is to construct a polynomial with six known roots.

using Polynomials
r = [-2.0, -1, 1, 1, 3, 6]
p = fromroots(r)

Now we use a standard numerical method for finding those roots, pretending that we don’t know them already. This corresponds to $\tilde{y}$ in Definition 1.4.1.

r̃ = sort(roots(p))   # type r\tilde and then press Tab
6-element Vector{Float64}: -1.9999999999999993 -1.0000000000000002 0.9999999876576552 1.0000000123423434 3.0000000000000036 5.999999999999993

Here are the relative errors in each of the computed roots.

println("Root errors:") 
@. abs(r - r̃) / r
Root errors:
6-element Vector{Float64}: -3.3306690738754696e-16 -2.220446049250313e-16 1.2342344812843464e-8 1.2342343369553532e-8 1.1842378929335002e-15 1.1842378929335002e-15

It seems that the forward error is acceptably close to machine epsilon for double precision in all cases except the double root at x=1x=1. This is not a surprise, though, given the poor conditioning at such roots.

Let’s consider the backward error. The data in the rootfinding problem is the polynomial coefficients. We can apply fromroots to find the coefficients of the polynomial (that is, the data) whose roots were actually computed by the numerical algorithm. This corresponds to $\tilde{x}$ in Definition 1.4.1.

p̃ = fromroots(r̃)

We find that in a relative sense, these coefficients are very close to those of the original, exact polynomial:

c,c̃ = coeffs(p), coeffs(p̃)
println("Coefficient errors:") 
@. abs(c - c̃) / c
Coefficient errors:
7-element Vector{Float64}: 1.7763568394002505e-15 -1.9737298215558337e-16 -1.6524249668839539e-15 6.459479416000911e-16 2.9605947323337506e-16 -5.551115123125783e-16 0.0

In summary, even though there are some computed roots relatively far from their correct values, they are nevertheless the roots of a polynomial that is very close to the original.