
Chapter 2

Functions

Forward substitution
forwardsub.jl
"""
    forwardsub(L,b)

Solve the lower-triangular linear system with matrix `L` and
right-hand side vector `b`.
"""
function forwardsub(L, b)
    n = size(L, 1)
    x = zeros(n)
    x[1] = b[1] / L[1, 1]
    for i in 2:n
        s = sum(L[i, j] * x[j] for j in 1:i-1)
        x[i] = (b[i] - s) / L[i, i]
    end
    return x
end
Backward substitution
backsub.jl
"""
    backsub(U,b)

Solve the upper-triangular linear system with matrix `U` and
right-hand side vector `b`.
"""
function backsub(U, b)
    n = size(U, 1)
    x = zeros(n)
    x[n] = b[n] / U[n, n]
    for i in n-1:-1:1
        s = sum(U[i, j] * x[j] for j in i+1:n)
        x[i] = (b[i] - s) / U[i, i]
    end
    return x
end
LU factorization (not stable)
lufact.jl
"""
    lufact(A)

Compute the LU factorization of square matrix `A`, returning the
factors.
"""
function lufact(A)
    n = size(A, 1)        # detect the dimensions from the input
    L = diagm(ones(n))   # ones on main diagonal, zeros elsewhere
    U = zeros(n, n)
    Aₖ = float(copy(A))  # make a working copy

    # Reduction by outer products
    for k in 1:n-1
        U[k, :] = Aₖ[k, :]
        L[:, k] = Aₖ[:, k] / U[k, k]
        Aₖ -= L[:, k] * U[k, :]'
    end
    U[n, n] = Aₖ[n, n]
    return LowerTriangular(L), UpperTriangular(U)
end
LU factorization with partial pivoting
plufact.jl
"""
    plufact(A)

Compute the PLU factorization of square matrix `A`, returning the
triangular factors and a row permutation vector.
"""
function plufact(A)
    n = size(A, 1)
    L = zeros(n, n)
    U = zeros(n, n)
    p = fill(0, n)
    Aₖ = float(copy(A))

    # Reduction by outer products
    for k in 1:n
        p[k] = argmax(abs.(Aₖ[:, k]))    # best pivot in column k
        U[k, :] = Aₖ[p[k], :]
        L[:, k] = Aₖ[:, k] / U[k, k]
        if k < n    # no update needed on last iteration
            Aₖ -= L[:, k] * U[k, :]'
        end
    end
    return LowerTriangular(L[p, :]), U, p
end

Examples

2.1 Polynomial interpolation

Example 2.1.1

We create two vectors for data about the population of China. The first has the years of census data and the other has the population, in millions of people.

year = [1982, 2000, 2010, 2015]; 
pop = [1008.18, 1262.64, 1337.82, 1374.62];

It’s convenient to measure time in years since 1980. We use .- to subtract a scalar from every element of a vector. We will also use a floating-point value in the subtraction, so the result is also in double precision.

t = year .- 1980.0
y = pop;

Now we have four data points $(t_1,y_1),\dots,(t_4,y_4)$, so $n=4$ and we seek an interpolating cubic polynomial. We construct the associated Vandermonde matrix:

V = [ t[i]^j for i in 1:4, j in 0:3 ]
4×4 Matrix{Float64}:
 1.0   2.0     4.0      8.0
 1.0  20.0   400.0   8000.0
 1.0  30.0   900.0  27000.0
 1.0  35.0  1225.0  42875.0

To solve for the vector of polynomial coefficients, we use a backslash to solve the linear system:

c = V \ y
4-element Vector{Float64}: 962.2387878787875 24.127754689754774 -0.5922620490620537 0.00684386724386731

The algorithms used by the backslash operator are the main topic of this chapter. As a check on the solution, we can compute the residual.

y - V * c
4-element Vector{Float64}: 0.0 0.0 2.2737367544323206e-13 0.0

Using floating-point arithmetic, it is not realistic to expect exact equality of quantities; a relative difference comparable to $\epsilon_\text{mach}$ is all we can look for.
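
To make that comparison concrete, we can measure the residual relative to the size of the data. (This is an added aside, not part of the original demo; it uses the norm function from LinearAlgebra.)

using LinearAlgebra
norm(y - V * c) / norm(y)    # relative residual; expect something on the order of eps()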

By our definitions, the elements of c are coefficients in ascending-degree order for the interpolating polynomial. We can use the polynomial to estimate the population of China in 2005:

using Polynomials
p = Polynomial(c)    # construct a polynomial
p(2005-1980)         # include the 1980 time shift
1302.2043001443

The official population value for 2005 was 1303.72, so our result is rather good.
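
As a quick added check (not in the original), we can express the quality of the estimate as a relative error against the value quoted above:

abs(p(2005 - 1980) - 1303.72) / 1303.72    # relative error of the 2005 estimate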

We can visualize the interpolation process. First, we plot the data as points.

using Plots
scatter(t, y;
    label="actual",  legend=:topleft,
    xlabel="years since 1980",  ylabel="population (millions)", 
    title="Population of China")

We want to superimpose a plot of the polynomial. We do that by evaluating it at a vector of points in the interval. The dot after the name of the polynomial is a universal way to apply a function to every element of an array, a technique known as broadcasting.

The range function constructs evenly spaced values given the endpoints and either the number of values, or the step size between them.

Adding a dot to the end of a function name causes it to be broadcast, i.e., applied to every element of a vector or matrix.

# Choose 500 times in the interval [0,35].
tt = range(0, 35, 500)

# Evaluate the polynomial at all the vector components.
yy = p.(tt)

foreach(println, yy[1:4])
962.2387878787875
963.9282039963299
965.6118068288089
967.2896105457506

Now we use plot! to add to the current plot, rather than replacing it.

plot!(tt, yy, label="interpolant")

2.2 Computing with matrices

Example 2.2.1

In Julia, vectors and matrices are one-dimensional and two-dimensional arrays, respectively. Square brackets are used to enclose elements of a matrix or vector. Use spaces for horizontal concatenation, and semicolons or new lines to indicate vertical concatenation.

A = [ 1 2 3 4 5; 50 40 30 20 10
    π sqrt(2) exp(1) (1+sqrt(5))/2 log(3) ]
3×5 Matrix{Float64}:
  1.0       2.0       3.0       4.0       5.0
 50.0      40.0      30.0      20.0      10.0
  3.14159   1.41421   2.71828   1.61803   1.09861
m, n = size(A)
(3, 5)

A vector is not quite the same thing as a matrix: it has only one dimension, not two. Separate its elements by commas or semicolons:

x = [ 3, 3, 0, 1, 0 ]
size(x)
(5,)

For some purposes, however, an $n$-vector in Julia behaves as though it has a column shape. Note the difference if we use spaces instead of commas inside the brackets:

y = [ 3 3 0 1 0 ]
size(y)
(1, 5)

This $1\times 5$ matrix is not equivalent to a vector.
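
One practical consequence of the column-shape convention is that a plain vector can appear on the right of a matrix-vector product; a quick added illustration:

A * x    # a 3×5 matrix times a 5-vector gives a 3-vector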

Concatenated elements within brackets may be matrices or vectors for a block representation, as long as all the block sizes are compatible.

[ x  x ]
5×2 Matrix{Int64}:
 3  3
 3  3
 0  0
 1  1
 0  0
[ x; x ]
10-element Vector{Int64}: 3 3 0 1 0 3 3 0 1 0

The zeros and ones functions construct matrices with entries all zero or one, respectively.

B = [ zeros(3, 2) ones(3, 1) ]
3×3 Matrix{Float64}:
 0.0  0.0  1.0
 0.0  0.0  1.0
 0.0  0.0  1.0

A single quote ' after a matrix returns its adjoint. For real matrices, this is the transpose; for complex-valued matrices, the elements are also conjugated.

A'
5×3 adjoint(::Matrix{Float64}) with eltype Float64:
 1.0  50.0  3.14159
 2.0  40.0  1.41421
 3.0  30.0  2.71828
 4.0  20.0  1.61803
 5.0  10.0  1.09861

If x is simply a vector, then its transpose has a row shape.

x'
1×5 adjoint(::Vector{Int64}) with eltype Int64: 3 3 0 1 0

There are many convenient shorthand ways of building vectors and matrices other than entering all of their entries directly or in a loop. To get a range with evenly spaced entries between two endpoints, you have two options. One is to use a colon :.

y = 1:4              # start:stop
1:4
z = 0:3:12           # start:step:stop
0:3:12

(Ranges are not strictly considered vectors, but they behave identically in most circumstances.) Instead of specifying the step size, you can give the number of points in the range if you use range.

s = range(-1, 1, 5)
-1.0:0.5:1.0
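
If an ordinary vector is ever required, a range can be converted with collect (a small added aside):

collect(s)    # materialize the range as a Vector{Float64}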

Accessing an element is done by giving one (for a vector) or two (for a matrix) index values within square brackets.

a = A[2, end-1]
20.0
x[2]
3

The indices can be vectors or ranges, in which case a block of the matrix is accessed.

A[1:2, end-2:end]    # first two rows, last three columns
2×3 Matrix{Float64}:
  3.0   4.0   5.0
 30.0  20.0  10.0

If a dimension has only the index : (a colon), then it refers to all the entries in that dimension of the matrix.

A[:, 1:2:end]        # all of the odd columns
3×3 Matrix{Float64}:
  1.0       3.0       5.0
 50.0      30.0      10.0
  3.14159   2.71828   1.09861

The matrix and vector senses of addition, subtraction, scalar multiplication, multiplication, and power are all handled by the usual symbols.

B = diagm( [-1, 0, -5] )   # create a diagonal matrix
3×3 Matrix{Int64}:
 -1  0   0
  0  0   0
  0  0  -5
@show size(A), size(B);
BA = B * A     # matrix product
(size(A), size(B)) = ((3, 5), (3, 3))

3×5 Matrix{Float64}:
  -1.0     -2.0      -3.0      -4.0      -5.0
   0.0      0.0       0.0       0.0       0.0
 -15.708   -7.07107  -13.5914   -8.09017  -5.49306

A * B causes an error here, because the dimensions aren’t compatible.

A * B    # throws an error
DimensionMismatch: matrix A has axes (Base.OneTo(3),Base.OneTo(5)), matrix B has axes (Base.OneTo(3),Base.OneTo(3))

Stacktrace:
 [1] _generic_matmatmul!(C::Matrix{Float64}, A::Matrix{Float64}, B::Matrix{Int64}, _add::LinearAlgebra.MulAddMul{true, true, Bool, Bool})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:883
 [2] generic_matmatmul!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:868 [inlined]
 [3] _mul!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:287 [inlined]
 [4] mul!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:285 [inlined]
 [5] mul!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:253 [inlined]
 [6] *(A::Matrix{Float64}, B::Matrix{Int64})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/matmul.jl:114
 [7] top-level scope
   @ In[29]:1

A square matrix raised to an integer power is the same as repeated matrix multiplication.

B^3    # same as B*B*B
3×3 Matrix{Int64}:
 -1  0     0
  0  0     0
  0  0  -125

Sometimes one instead wants to treat a matrix or vector as a mere array and simply apply a single operation to each element of it. For multiplication, division, and power, the corresponding operators start with a dot.

C = -A;

Because both matrices are $3\times 5$, A * C would be an error here, but elementwise operations are fine.

elementwise = A .* C
3×5 Matrix{Float64}:
    -1.0       -4.0     -9.0       -16.0       -25.0
 -2500.0    -1600.0   -900.0      -400.0      -100.0
    -9.8696    -2.0     -7.38906    -2.61803    -1.20695

The two operands of a dot operator have to have the same size—unless one is a scalar, in which case it is expanded or broadcast to be the same size as the other operand.

x_to_two = x .^ 2
5-element Vector{Int64}: 9 9 0 1 0
two_to_x = 2 .^ x
5-element Vector{Int64}: 8 8 1 2 1

Most of the mathematical functions, such as cos, sin, log, exp, and sqrt, expect scalars as operands. However, you can broadcast any function, including ones that you have defined, across a vector or array by using a special dot syntax.

show(cos.(π * x))    # broadcast to a function
[-1.0, -1.0, 1.0, -1.0, 1.0]

Rather than dotting multiple individual functions, you can use @. before an expression to broadcast everything within it.

show(@. cospi( (x + 1)^3) )    # broadcast an entire expression
[1.0, 1.0, -1.0, 1.0, -1.0]
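
The same dot syntax applies to functions you define yourself. Here is a small added illustration; the function q is made up for this example:

q(t) = t^2 - 3t + 2    # a user-defined scalar function
show(q.(x))            # broadcast it over the vector x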

2.3 Linear systems

Example 2.3.2

For a square matrix $\mathbf{A}$, the syntax A \ b is mathematically equivalent to $\mathbf{A}^{-1}\mathbf{b}$.

A = [1 0 -1; 2 2 1; -1 -3 0]
3×3 Matrix{Int64}:
  1   0  -1
  2   2   1
 -1  -3   0
b = [1, 2, 3]
3-element Vector{Int64}: 1 2 3
x = A \ b
3-element Vector{Float64}: 2.1428571428571432 -1.7142857142857144 1.1428571428571428

One way to check the answer is to compute a quantity known as the residual. It is (ideally) close to machine precision (relative to the elements in the data).

residual = b - A * x
3-element Vector{Float64}: -4.440892098500626e-16 -4.440892098500626e-16 0.0

If the matrix $\mathbf{A}$ is singular, you may get an error.

A = [0 1; 0 0]
b = [1, -1]
x = A \ b    # throws an error
SingularException(2)

Stacktrace:
 [1] generic_trimatdiv!(C::Vector{Float64}, uploc::Char, isunitc::Char, tfun::typeof(identity), A::Matrix{Int64}, B::Vector{Int64})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/triangular.jl:1388
 [2] _ldiv!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/triangular.jl:966 [inlined]
 [3] ldiv!
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/triangular.jl:959 [inlined]
 [4] \
   @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/triangular.jl:1721 [inlined]
 [5] \(A::Matrix{Int64}, B::Vector{Int64})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/generic.jl:1130
 [6] top-level scope
   @ In[41]:3

In this case we can check that the rank of $\mathbf{A}$ is less than its number of columns, indicating singularity.

rank(A)
1

A linear system with a singular matrix might have no solution or infinitely many solutions, but in either case, backslash will fail. Moreover, detecting singularity is a lot like checking whether two floating-point numbers are exactly equal: because of roundoff, it could be missed. In Conditioning of linear systems we’ll find a robust way to fully describe this situation.
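
To illustrate how near-singularity can slip through (a made-up example, not from the text), consider a matrix that is singular up to roundoff. Backslash raises no error, and only the condition number warns us not to trust the result:

using LinearAlgebra
A = [1.0 1.0; 1.0 1.0 + 1e-15]
x = A \ [2.0, 2.0]    # no exception is thrown here...
cond(A)               # ...but this is huge, so x is not to be trusted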

Example 2.3.3

It’s easy to get just the lower triangular part of any matrix using the tril function.

A = rand(1.:9., 5, 5)
L = tril(A)
5×5 Matrix{Float64}:
 3.0  0.0  0.0  0.0  0.0
 2.0  2.0  0.0  0.0  0.0
 6.0  6.0  6.0  0.0  0.0
 3.0  2.0  4.0  9.0  0.0
 8.0  4.0  7.0  6.0  4.0

We’ll set up and solve a linear system with this matrix.

b = ones(5)
x = FNC.forwardsub(L,b)
5-element Vector{Float64}: 0.3333333333333333 0.16666666666666669 -0.3333333333333333 0.11111111111111109 -0.16666666666666663

It’s not clear how accurate this answer is. However, the residual should be zero or comparable to $\epsilon_\text{mach}$.

b - L * x
5-element Vector{Float64}: 0.0 0.0 0.0 0.0 2.220446049250313e-16

Next we’ll engineer a problem to which we know the exact answer. Use \alpha Tab and \beta Tab to get the Greek letters.

α = 0.3;
β = 2.2;
U = diagm( 0=>ones(5), 1=>[-1, -1, -1, -1] )
U[1, [4, 5]] = [ α - β, β ]
U
5×5 Matrix{Float64}:
 1.0  -1.0   0.0  -1.9   2.2
 0.0   1.0  -1.0   0.0   0.0
 0.0   0.0   1.0  -1.0   0.0
 0.0   0.0   0.0   1.0  -1.0
 0.0   0.0   0.0   0.0   1.0
x_exact = ones(5)
b = [α, 0, 0, 0, 1]
5-element Vector{Float64}: 0.3 0.0 0.0 0.0 1.0

Now we use backward substitution to solve for $\mathbf{x}$, and compare to the exact solution we know already.

x = FNC.backsub(U,b)
err = x - x_exact
5-element Vector{Float64}: 2.220446049250313e-16 0.0 0.0 0.0 0.0

Everything seems OK here. But another example, with a different value for β, is more troubling.

α = 0.3;
β = 1e12;
U = diagm( 0=>ones(5), 1=>[-1, -1, -1, -1] )
U[1, [4, 5]] = [ α - β, β ]
b = [α, 0, 0, 0, 1]

x = FNC.backsub(U,b)
err = x - x_exact
5-element Vector{Float64}: -4.882812499995559e-5 0.0 0.0 0.0 0.0

It’s not so good to get 4 digits of accuracy after starting with 16! The source of the error is not hard to track down. Solving for $x_1$ performs $(\alpha-\beta)+\beta$ in the first row. Since $|\alpha|$ is so much smaller than $|\beta|$, this is a recipe for losing digits to subtractive cancellation.
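
We can reproduce the cancellation in isolation (a one-line check added for illustration):

α, β = 0.3, 1e12
(α - β) + β    # exact value is 0.3, but several digits have been lost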

2.4 LU factorization

Example 2.4.2

We explore the outer product formula for two random triangular matrices.

L = tril( rand(1:9, 3, 3) )
3×3 Matrix{Int64}:
 5  0  0
 3  6  0
 8  7  8
U = triu( rand(1:9, 3, 3) )
3×3 Matrix{Int64}:
 4  3  1
 0  7  6
 0  0  7

Here are the three outer products in the sum in (2.4.4):

L[:, 1] * U[1, :]'
3×3 Matrix{Int64}:
 20  15  5
 12   9  3
 32  24  8
L[:, 2] * U[2, :]'
3×3 Matrix{Int64}:
 0   0   0
 0  42  36
 0  49  42
L[:, 3] * U[3, :]'
3×3 Matrix{Int64}:
 0  0   0
 0  0   0
 0  0  56

Simply because of the triangular zero structures, only the first outer product contributes to the first row and first column of the entire product.
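
As a quick numerical confirmation of (2.4.4) (an added check, not part of the original demo), the three outer products sum to the full product:

sum( L[:, k] * U[k, :]' for k in 1:3 ) == L * U    # true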

Example 2.4.3

For illustration, we work on a $4 \times 4$ matrix. We name it with a subscript in preparation for what comes.

A₁ = [
     2    0    4     3 
    -4    5   -7   -10 
     1   15    2   -4.5
    -2    0    2   -13
    ];
L = diagm(ones(4))
U = zeros(4, 4);

Now we appeal to (2.4.5). Since $L_{11}=1$, we see that the first row of $\mathbf{U}$ is just the first row of $\mathbf{A}_1$.

U[1, :] = A₁[1, :]
U
4×4 Matrix{Float64}:
 2.0  0.0  4.0  3.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0

From (2.4.6), we see that we can find the first column of $\mathbf{L}$ from the first column of $\mathbf{A}_1$.

L[:, 1] = A₁[:, 1] / U[1, 1]
L
4×4 Matrix{Float64}:
  1.0  0.0  0.0  0.0
 -2.0  1.0  0.0  0.0
  0.5  0.0  1.0  0.0
 -1.0  0.0  0.0  1.0

We have obtained the first term in the sum (2.4.4) for $\mathbf{L}\mathbf{U}$, and we subtract it away from $\mathbf{A}_1$.

A₂ = A₁ - L[:, 1] * U[1, :]'
4×4 Matrix{Float64}:
 0.0   0.0  0.0    0.0
 0.0   5.0  1.0   -4.0
 0.0  15.0  0.0   -6.0
 0.0   0.0  6.0  -10.0

Now $\mathbf{A}_2 = \boldsymbol{\ell}_2\mathbf{u}_2^T + \boldsymbol{\ell}_3\mathbf{u}_3^T + \boldsymbol{\ell}_4\mathbf{u}_4^T$. If we ignore the first row and first column of the matrices in this equation, then in what remains we are in the same situation as at the start. Specifically, only $\boldsymbol{\ell}_2\mathbf{u}_2^T$ has any effect on the second row and column, so we can deduce them now.

U[2, :] = A₂[2, :]
L[:, 2] = A₂[:, 2] / U[2, 2]
L
4×4 Matrix{Float64}:
  1.0  0.0  0.0  0.0
 -2.0  1.0  0.0  0.0
  0.5  3.0  1.0  0.0
 -1.0  0.0  0.0  1.0

If we subtract off the latest outer product, we have a matrix that is zero in the first two rows and columns.

A₃ = A₂ - L[:, 2] * U[2, :]'
4×4 Matrix{Float64}:
 0.0  0.0   0.0    0.0
 0.0  0.0   0.0    0.0
 0.0  0.0  -3.0    6.0
 0.0  0.0   6.0  -10.0

Now we can deal with the lower right $2\times 2$ submatrix of the remainder in a similar fashion.

U[3, :] = A₃[3, :]
L[:, 3] = A₃[:, 3] / U[3, 3]
A₄ = A₃ - L[:, 3] * U[3, :]'
4×4 Matrix{Float64}:
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  2.0

Finally, we pick up the last unknown in the factors.

U[4, 4] = A₄[4, 4];

We now have all of $\mathbf{L}$,

L
4×4 Matrix{Float64}:
  1.0  0.0  -0.0  0.0
 -2.0  1.0  -0.0  0.0
  0.5  3.0   1.0  0.0
 -1.0  0.0  -2.0  1.0

and all of $\mathbf{U}$,

U
4×4 Matrix{Float64}:
 2.0  0.0   4.0   3.0
 0.0  5.0   1.0  -4.0
 0.0  0.0  -3.0   6.0
 0.0  0.0   0.0   2.0

We can verify that we have a correct factorization of the original matrix by computing the backward error:

A₁ - L * U
4×4 Matrix{Float64}:
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0

In floating point, we cannot expect the difference to be exactly zero as we found in this toy example. Instead, we would be satisfied to see that each element of the difference above is comparable in size to machine precision.
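
One way to make that check concrete (an added suggestion) is an entrywise relative backward error:

maximum(abs.(A₁ - L * U)) / maximum(abs.(A₁))    # zero here; comparable to eps() in general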

Example 2.4.4

Here are the data for a linear system $\mathbf{A}\mathbf{x}=\mathbf{b}$.

A = [2 0 4 3; -4 5 -7 -10; 1 15 2 -4.5; -2 0 2 -13];
b = [4,9,9,4];

We apply Function 2.4.1 and then do two triangular solves.

L, U = FNC.lufact(A)
z = FNC.forwardsub(L, b)
x = FNC.backsub(U, z)
4-element Vector{Float64}: 192.66666666666666 -15.533333333333335 -65.33333333333333 -40.0

A check on the residual assures us that we found the solution.

b - A*x
4-element Vector{Float64}: 0.0 -5.684341886080802e-14 2.842170943040401e-14 0.0

2.5 Efficiency of matrix computations

Example 2.5.4

Here is a straightforward implementation of matrix-vector multiplication.

n = 6
A = randn(n, n)
x = rand(n)
y = zeros(n)
for i in 1:n
    for j in 1:n
        y[i] += A[i, j] * x[j]    # 1 multiply, 1 add
    end
end

Each of the loops implies a summation of flops. The total flop count for this algorithm is

$$\sum_{i=1}^n \sum_{j=1}^n 2 = \sum_{i=1}^n 2n = 2n^2.$$

Since the matrix $\mathbf{A}$ has $n^2$ elements, all of which have to be involved in the product, it seems unlikely that we could get a flop count that is smaller than $O(n^2)$ in general.

Let’s run an experiment with the built-in matrix-vector multiplication. Note that Julia is unusual in that a loop has a variable scope separate from that of the enclosing code. Thus, for n in n below means that inside the loop, the name n will take on each one of the values that were previously assigned to the vector n.

n = 1000:1000:5000
t = []
for n in n
    A = randn(n, n)  
    x = randn(n)
    time = @elapsed for j in 1:80; A * x; end
    push!(t, time)
end

The reason for doing multiple repetitions at each value of $n$ in the loop above is to avoid having times so short that the resolution of the timer is significant.

@pt :header = ["size", "time (sec.)"] [n t]

Looking at the timings just for $n=2000$ and $n=4000$, they have ratio

@show t[n.==4000] ./ t[n.==2000];
t[n .== 4000] ./ t[n .== 2000] = [1.4117809950868896]

If the run time is dominated by flops, then we expect this ratio to be

$$\frac{2(4000)^2}{2(2000)^2}=4.$$
Example 2.5.5

Let’s repeat the experiment of the previous figure for more, and larger, values of $n$.

randn(5,5)*randn(5);  # throwaway to force compilation

n = 400:200:6000
t = []
for n in n
    A = randn(n, n)  
    x = randn(n)
    time = @elapsed for j in 1:50; A * x; end
    push!(t, time)
end

Plotting the time as a function of $n$ on log-log scales is equivalent to plotting the logs of the variables.

scatter(n, t, label="data", legend=false,
    xaxis=(:log10, L"n"), yaxis=(:log10, "elapsed time (sec)"),
    title="Timing of matrix-vector multiplications")

You can see that while the full story is complicated, the graph is trending to a straight line of positive slope. For comparison, we can plot a line that represents $O(n^2)$ growth exactly. (All such lines have slope equal to 2.)

plot!(n, t[end] * (n/n[end]).^2, l=:dash,
    label=L"O(n^2)", legend=:topleft)
Example 2.5.6

We’ll test the conclusion of $O(n^3)$ flops experimentally, using the built-in lu function instead of the purely instructive lufact.

lu(randn(3, 3));   # throwaway to force compilation

n = 400:400:4000
t = []
for n in n
    A = randn(n, n)  
    time = @elapsed for j in 1:12; lu(A); end
    push!(t, time)
end

We plot the timings on a log-log graph and compare them to $O(n^3)$. The result could vary significantly from machine to machine, but in theory the data should start to parallel the line as $n\to\infty$.

scatter(n, t, label="data", legend=:topleft,
    xaxis=(:log10, L"n"), yaxis=(:log10, "elapsed time"))
plot!(n, t[end ]* (n/n[end]).^3, l=:dash, label=L"O(n^3)")

2.6 Row pivoting

Example 2.6.1

Here is a previously encountered matrix that factors well.

A = [2 0 4 3 ; -4 5 -7 -10 ; 1 15 2 -4.5 ; -2 0 2 -13];
L, U = FNC.lufact(A)
L
4×4 LowerTriangular{Float64, Matrix{Float64}}:
  1.0   ⋅     ⋅    ⋅
 -2.0  1.0    ⋅    ⋅
  0.5  3.0   1.0   ⋅
 -1.0  0.0  -2.0  1.0

If we swap the second and fourth rows of $\mathbf{A}$, the result is still nonsingular. However, the factorization now fails.

A[[2, 4], :] = A[[4, 2], :]  
L, U = FNC.lufact(A)
L
4×4 LowerTriangular{Float64, Matrix{Float64}}:
  1.0    ⋅     ⋅    ⋅
 -1.0  NaN     ⋅    ⋅
  0.5  Inf   NaN    ⋅
 -2.0  Inf   NaN   1.0

The presence of NaN in the result indicates that some impossible operation was required. The source of the problem is easy to locate. We can find the first outer product in the factorization just fine:

U[1, :] = A[1, :]
L[:, 1] = A[:, 1] / U[1, 1]
A -= L[:, 1] * U[1, :]'
4×4 Matrix{Float64}:
 0.0   0.0  0.0    0.0
 0.0   0.0  6.0  -10.0
 0.0  15.0  0.0   -6.0
 0.0   5.0  1.0   -4.0

The next step is U[2, :] = A[2, :], which is also OK. But then we are supposed to divide by U[2, 2], which is zero. The algorithm cannot continue.
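
We can see the breakdown directly (an added illustration):

A[2, 2]              # the prospective pivot is exactly zero
A[:, 2] / A[2, 2]    # dividing by it produces Inf and NaN entries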

Example 2.6.2

Here is the trouble-making matrix from Demo 2.6.1.

A₁ = [2 0 4 3 ; -2 0 2 -13; 1 15 2 -4.5 ; -4 5 -7 -10]
4×4 Matrix{Float64}:
  2.0   0.0   4.0    3.0
 -2.0   0.0   2.0  -13.0
  1.0  15.0   2.0   -4.5
 -4.0   5.0  -7.0  -10.0

We now find the largest candidate pivot in the first column. We don’t care about sign, so we take absolute values before finding the max.

i = argmax( abs.(A₁[:, 1]) )
4

This is the row of the matrix that we extract to put into $\mathbf{U}$. That guarantees that the division used to find $\boldsymbol{\ell}_1$ will be valid.

L, U = zeros(4,4),zeros(4,4)
U[1, :] = A₁[i, :]
L[:, 1] = A₁[:, 1] / U[1, 1]
A₂ = A₁ - L[:, 1] * U[1, :]'
4×4 Matrix{Float64}:
 0.0   2.5    0.5   -2.0
 0.0  -2.5    5.5   -8.0
 0.0  16.25   0.25  -7.0
 0.0   0.0    0.0    0.0

Observe that $\mathbf{A}_2$ has a new zero row and zero column, but the zero row is the fourth rather than the first. However, we forge on by using the largest possible pivot in column 2 for the next outer product.

@show i = argmax( abs.(A₂[:, 2]) ) 
U[2, :] = A₂[i, :]
L[:, 2] = A₂[:, 2] / U[2, 2]
A₃ = A₂ - L[:, 2] * U[2, :]'
i = argmax(abs.(A₂[:, 2])) = 3
4×4 Matrix{Float64}:
 0.0  0.0  0.461538  -0.923077
 0.0  0.0  5.53846   -9.07692
 0.0  0.0  0.0        0.0
 0.0  0.0  0.0        0.0

Now we have zeroed out the third row as well as the second column. We can finish out the procedure.

@show i = argmax( abs.(A₃[:, 3]) ) 
U[3, :] = A₃[i, :]
L[:, 3] = A₃[:, 3] / U[3, 3]
A₄ = A₃ - L[:, 3] * U[3, :]'
i = argmax(abs.(A₃[:, 3])) = 2
4×4 Matrix{Float64}:
 0.0  0.0  0.0  -0.166667
 0.0  0.0  0.0   0.0
 0.0  0.0  0.0   0.0
 0.0  0.0  0.0   0.0
@show i = argmax( abs.(A₄[:, 4]) ) 
U[4, :] = A₄[i, :]
L[:, 4] = A₄[:, 4] / U[4, 4];
i = argmax(abs.(A₄[:, 4])) = 1

We do have a factorization of the original matrix:

A₁ - L * U
4×4 Matrix{Float64}:
 0.0  -1.38778e-16  0.0  0.0
 0.0   1.38778e-16  0.0  0.0
 0.0   0.0          0.0  0.0
 0.0   0.0          0.0  0.0

And $\mathbf{U}$ has the required structure:

U
4×4 Matrix{Float64}:
 -4.0   5.0    -7.0       -10.0
  0.0  16.25    0.25       -7.0
  0.0   0.0     5.53846    -9.07692
  0.0   0.0     0.0        -0.166667

However, the triangularity of $\mathbf{L}$ has been broken.

L
4×4 Matrix{Float64}:
 -0.5    0.153846  0.0833333   1.0
  0.5   -0.153846  1.0        -0.0
 -0.25   1.0       0.0        -0.0
  1.0    0.0       0.0        -0.0
Example 2.6.3

Here again is the matrix from Demo 2.6.2.

A = [2 0 4 3 ; -2 0 2 -13; 1 15 2 -4.5 ; -4 5 -7 -10]
4×4 Matrix{Float64}:
  2.0   0.0   4.0    3.0
 -2.0   0.0   2.0  -13.0
  1.0  15.0   2.0   -4.5
 -4.0   5.0  -7.0  -10.0

As the factorization proceeded, the pivots were selected from rows 4, 3, 2, and finally 1. If we were to put the rows of $\mathbf{A}$ into that order, then the algorithm would run exactly like the plain LU factorization from LU factorization.

B = A[[4, 3, 2, 1], :]
L, U = FNC.lufact(B);

We obtain the same $\mathbf{U}$ as before:

U
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 -4.0   5.0    -7.0       -10.0
   ⋅   16.25    0.25       -7.0
   ⋅     ⋅      5.53846    -9.07692
   ⋅     ⋅       ⋅         -0.166667

And $\mathbf{L}$ has the same rows as before, but arranged into triangular order:

L
4×4 LowerTriangular{Float64, Matrix{Float64}}:
  1.0     ⋅          ⋅          ⋅
 -0.25   1.0         ⋅          ⋅
  0.5   -0.153846   1.0         ⋅
 -0.5    0.153846   0.0833333  1.0
Example 2.6.4

The third output of plufact is the permutation vector we need to apply to $\mathbf{A}$.

A = rand(1:20, 4, 4)
L, U, p = FNC.plufact(A)
A[p,:] - L * U   # should be ≈ 0
4×4 Matrix{Float64}:
 0.0   0.0          0.0          0.0
 0.0   0.0          0.0          0.0
 0.0   0.0          3.55271e-15  0.0
 0.0  -8.88178e-16  1.77636e-15  3.55271e-15

Given a vector $\mathbf{b}$, we solve $\mathbf{A}\mathbf{x}=\mathbf{b}$ by first permuting the entries of $\mathbf{b}$ and then proceeding as before.

b = rand(4)
z = FNC.forwardsub(L,b[p])
x = FNC.backsub(U,z)
4-element Vector{Float64}: 0.01399443922210889 0.014538620453601336 -0.014792442049713922 0.03240571682877231

A residual check is successful:

b - A*x
4-element Vector{Float64}: 0.0 0.0 0.0 0.0
Example 2.6.5

With the syntax A \ b, the matrix A is PLU-factored, followed by two triangular solves.

A = randn(500, 500)   # 500x500 with normal random entries
A \ rand(500)          # force compilation
@elapsed for k=1:50; A \ rand(500); end
1.552101792

In Efficiency of matrix computations we showed that the factorization is by far the most costly part of the solution process. A factorization object allows us to do that costly step only once per unique matrix.

factored = lu(A)     # store factorization result
factored \ rand(500)   # force compilation
@elapsed for k=1:50; factored \ rand(500); end
0.006267875
Example 2.6.6

We construct a linear system for this matrix with $\epsilon=10^{-12}$ and exact solution $[1,1]$:

ϵ = 1e-12
A = [-ϵ 1; 1 -1]
b = A * [1, 1]
2-element Vector{Float64}: 0.999999999999 0.0

We can factor the matrix without pivoting and solve for $\mathbf{x}$.

L, U = FNC.lufact(A)
x = FNC.backsub( U, FNC.forwardsub(L, b) )
2-element Vector{Float64}: 0.9999778782798785 1.0

Note that we have obtained only about 5 accurate digits for $x_1$. We could make the result even more inaccurate by making ε even smaller:

ϵ = 1e-20; A = [-ϵ 1; 1 -1]
b = A * [1, 1]
L, U = FNC.lufact(A)
x = FNC.backsub( U, FNC.forwardsub(L, b) )
2-element Vector{Float64}: -0.0 1.0

This effect is not due to ill conditioning of the problem—a solution with PLU factorization works perfectly:

A \ b
2-element Vector{Float64}: 1.0 1.0

2.7 Vector and matrix norms

Example 2.7.1

In Julia the LinearAlgebra package has a norm function for vector norms.

x = [2, -3, 1, -1]
twonorm = norm(x)         # or norm(x,2)
3.872983346207417
infnorm = norm(x, Inf)
3.0
onenorm = norm(x, 1)
7.0

There is also a normalize function that divides a vector by its norm, making it a unit vector.

normalize(x, Inf)
4-element Vector{Float64}: 0.6666666666666666 -1.0 0.3333333333333333 -0.3333333333333333
Example 2.7.2
A = [ 2 0; 1 -1 ]
2×2 Matrix{Int64}:
 2   0
 1  -1

In Julia, one uses norm for vector norms and for the Frobenius norm of a matrix, which is like stacking the matrix into a single vector before taking the 2-norm.

Fronorm = norm(A)
2.449489742783178
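
We can confirm the stacking interpretation directly (a small added check):

norm(A) ≈ norm(vec(A))    # Frobenius norm equals the 2-norm of the stacked entries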

Most of the time we want to use opnorm, which is an induced matrix norm. The default is the 2-norm.

twonorm = opnorm(A)
2.2882456112707374

You can get the 1-norm as well.

onenorm = opnorm(A, 1)
3.0

According to (2.7.15), the matrix 1-norm is equivalent to the maximum of the sums down the columns (in absolute value).

# Sum along each column (dims=1 collapses the row dimension):
maximum( sum(abs.(A), dims=1) )
3

Similarly, we can get the $\infty$-norm and check our formula for it.

infnorm = opnorm(A, Inf)
2.0
# Sum along each row (dims=2 collapses the column dimension):
maximum( sum(abs.(A), dims=2) )
2

Next we illustrate a geometric interpretation of the 2-norm. First, we will sample a lot of vectors on the unit circle in $\mathbb{R}^2$.

# Construct 601 unit column vectors.
θ = 2π * (0:1/600:1)   # type \theta then Tab
x = [ fun(t) for fun in [cos, sin], t in θ ];

To create an array of plots, start with a plot that has a layout argument, then do subsequent plot! calls with a subplot argument.

plot(aspect_ratio=1, layout=(1, 2),
    xlabel=L"x_1",  ylabel=L"x_2")
plot!(x[1, :], x[2, :], subplot=1, title="Unit circle")

The linear function $\mathbf{f}(\mathbf{x}) = \mathbf{A}\mathbf{x}$ defines a mapping from $\mathbb{R}^2$ to $\mathbb{R}^2$. We can apply A to every column of x by using a single matrix multiplication.

Ax = A * x;

The image of the transformed vectors is an ellipse.

plot!(Ax[1, :], Ax[2, :], 
    subplot=2, title="Image under x → Ax")

That ellipse just touches the circle of radius $\|\mathbf{A}\|_2$.

plot!(twonorm*x[1, :], twonorm*x[2, :], subplot=2, l=:dash)

2.8 Conditioning of linear systems

Example 2.8.1

Julia has a function cond to compute matrix condition numbers. By default, the 2-norm is used. As an example, the family of Hilbert matrices is famously badly conditioned. Here is the $6\times 6$ case.

A = [ 1 / (i + j) for i in 1:6, j in 1:6 ]
κ = cond(A)
5.109816297946132e7

Because $\kappa\approx 10^8$, it’s possible to lose nearly 8 digits of accuracy in the process of passing from $\mathbf{A}$ and $\mathbf{b}$ to $\mathbf{x}$. That fact is independent of the algorithm; it’s inevitable once the data are expressed in finite precision.

Let’s engineer a linear system problem to observe the effect of a perturbation. We will make sure we know the exact answer.

x = 1:6
b = A * x
6-element Vector{Float64}: 4.407142857142857 3.564285714285714 3.013095238095238 2.6174603174603175 2.317279942279942 2.0807359307359308

Now we perturb the system matrix and vector randomly by $10^{-10}$ in norm.

# type \Delta then Tab to get Δ
ΔA = randn(size(A));  ΔA = 1e-10 * (ΔA / opnorm(ΔA));
Δb = randn(size(b));  Δb = 1e-10 * normalize(Δb);

We solve the perturbed problem using pivoted LU and see how the solution was changed.

new_x = ((A + ΔA) \ (b + Δb))
Δx = new_x - x
6-element Vector{Float64}: 1.221536697793013e-6 -6.139316571562858e-6 -2.0607307068765124e-5 0.0001307569174295864 -0.0001931264817560674 8.83661063690866e-5

Here is the relative error in the solution.

@show relative_error = norm(Δx) / norm(x);
relative_error = norm(Δx) / norm(x) = 2.624224063874282e-5

And here are upper bounds predicted using the condition number of the original matrix.

println("Upper bound due to b: $(κ * norm(Δb) / norm(b))")
println("Upper bound due to A: $(κ * opnorm(ΔA) / opnorm(A))")
Upper bound due to b: 0.0006723667714371329
Upper bound due to A: 0.004566989873939067

Even if we didn’t make any manual perturbations to the data, machine roundoff does so at the relative level of $\epsilon_\text{mach}$.

Δx = A\b - x
@show relative_error = norm(Δx) / norm(x);
@show rounding_bound = κ * eps();
relative_error = norm(Δx) / norm(x) = 7.822650774976615e-10
rounding_bound = κ * eps() = 1.134607141116935e-8

Larger Hilbert matrices are even more poorly conditioned:

A = [ 1 / (i + j) for i=1:14, j=1:14 ];
κ = cond(A)
5.802584125151949e17

Note that κ exceeds $1/\epsilon_\text{mach}$. In principle we therefore may end up with an answer that has relative error greater than 100%.

rounding_bound = κ*eps()
128.8432499613623

Let’s put that prediction to the test.

x = 1:14
b = A * x  
Δx = A\b - x
@show relative_error = norm(Δx) / norm(x);
relative_error = norm(Δx) / norm(x) = 4.469466154206132

As anticipated, the solution has zero accurate digits in the 2-norm.

2.9 Exploiting matrix structure

Example 2.9.1

Here is a small tridiagonal matrix. Note that there is one fewer element on the super- and subdiagonals than on the main diagonal.

A = diagm( -1 => [4, 3, 2, 1, 0], 
    0 => [2, 2, 0, 2, 1, 2], 
    1 => fill(-1, 5) )
6×6 Matrix{Int64}:
 2  -1   0   0   0   0
 4   2  -1   0   0   0
 0   3   0  -1   0   0
 0   0   2   2  -1   0
 0   0   0   1   1  -1
 0   0   0   0   0   2

We can extract the elements on any diagonal using the diag function. The main or central diagonal is numbered zero, above and to the right of that is positive, and below and to the left is negative.

@show diag_main = diag(A);
@show diag_minusone = diag(A, -1);
diag_main = diag(A) = [2, 2, 0, 2, 1, 2]

diag_minusone = diag(A, -1) = [4, 3, 2, 1, 0]

The lower and upper bandwidths of $\mathbf{A}$ are repeated in the factors from the unpivoted LU factorization.

L, U = FNC.lufact(A)
L
6×6 LowerTriangular{Float64, Matrix{Float64}}:
 1.0   ⋅     ⋅        ⋅         ⋅    ⋅
 2.0  1.0    ⋅        ⋅         ⋅    ⋅
 0.0  0.75  1.0       ⋅         ⋅    ⋅
 0.0  0.0   2.66667  1.0        ⋅    ⋅
 0.0  0.0   0.0      0.214286  1.0   ⋅
 0.0  0.0   0.0      0.0       0.0  1.0
U
6×6 UpperTriangular{Float64, Matrix{Float64}}:
 2.0  -1.0   0.0      0.0       0.0       0.0
  ⋅    4.0  -1.0      0.0       0.0       0.0
  ⋅     ⋅    0.75    -1.0       0.0       0.0
  ⋅     ⋅     ⋅       4.66667  -1.0       0.0
  ⋅     ⋅     ⋅        ⋅        1.21429  -1.0
  ⋅     ⋅     ⋅        ⋅         ⋅        2.0
Example 2.9.2

We begin with a symmetric $\mathbf{A}$.

A₁ = [  2     4     4     2
        4     5     8    -5
        4     8     6     2
        2    -5     2   -26 ];

We won’t use pivoting, so the pivot element is at position (1,1). This will become the first element on the diagonal of $\mathbf{D}$. Then we divide by that pivot to get the first column of $\mathbf{L}$.

L = diagm(ones(4))
d = zeros(4)
d[1] = A₁[1, 1]
L[:, 1] = A₁[:, 1] / d[1]
A₂ = A₁ - d[1] * L[:, 1] * L[:, 1]'
4×4 Matrix{Float64}:
 0.0   0.0   0.0    0.0
 0.0  -3.0   0.0   -9.0
 0.0   0.0  -2.0   -2.0
 0.0  -9.0  -2.0  -28.0

We are now set up the same way for the submatrix in rows and columns 2–4.

d[2] = A₂[2, 2]
L[:, 2] = A₂[:, 2] / d[2]
A₃ = A₂ - d[2] * L[:, 2] * L[:, 2]'
4×4 Matrix{Float64}:
 0.0  0.0   0.0   0.0
 0.0  0.0   0.0   0.0
 0.0  0.0  -2.0  -2.0
 0.0  0.0  -2.0  -1.0

We continue working our way down the diagonal.

d[3] = A₃[3, 3]
L[:, 3] = A₃[:, 3] / d[3]
A₄ = A₃ - d[3] * L[:, 3] * L[:, 3]'
d[4] = A₄[4, 4]
@show d;
L
d = [2.0, -3.0, -2.0, 1.0]
4×4 Matrix{Float64}:
 1.0  -0.0  -0.0  0.0
 2.0   1.0  -0.0  0.0
 2.0  -0.0   1.0  0.0
 1.0   3.0   1.0  1.0

We have arrived at the desired factorization, which we can validate:

opnorm(A₁ - (L * diagm(d) * L'))
0.0
Example 2.9.3

A randomly chosen matrix is extremely unlikely to be symmetric. However, there is a simple way to symmetrize one.

A = rand(1.0:9.0, 4, 4)
B = A + A'
4×4 Matrix{Float64}:
  2.0   7.0  16.0   8.0
  7.0  14.0  13.0   6.0
 16.0  13.0   2.0   9.0
  8.0   6.0   9.0  14.0

Similarly, a random symmetric matrix is unlikely to be positive definite. The Cholesky algorithm always detects a non-PD matrix by quitting with an error.

cholesky(B)    # throws an error
PosDefException: matrix is not positive definite; Factorization failed.

Stacktrace:
  [1] checkpositivedefinite
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/factorization.jl:68 [inlined]
  [2] #cholesky!#163
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:269 [inlined]
  [3] cholesky!
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:267 [inlined]
  [4] #cholesky!#164
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:301 [inlined]
  [5] cholesky! (repeats 2 times)
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:295 [inlined]
  [6] _cholesky
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:411 [inlined]
  [7] cholesky(A::Matrix{Float64}, ::NoPivot; check::Bool)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:401
  [8] cholesky
    @ ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:401 [inlined]
  [9] cholesky(A::Matrix{Float64})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.11.2+0.aarch64.apple.darwin14/share/julia/stdlib/v1.11/LinearAlgebra/src/cholesky.jl:401
 [10] top-level scope
    @ In[139]:1

It’s not hard to manufacture an SPD matrix to try out the Cholesky factorization.

B = A' * A
cf = cholesky(B)
Cholesky{Float64, Matrix{Float64}}
U factor:
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 7.93725  8.44121  4.91354  9.5751
  ⋅       6.53804  9.86898  3.39163
  ⋅        ⋅       4.29656  4.53397
  ⋅        ⋅        ⋅       1.50247

What’s returned is a factorization object. Another step is required to extract the factor as a matrix:

R = cf.U
4×4 UpperTriangular{Float64, Matrix{Float64}}:
 7.93725  8.44121  4.91354  9.5751
  ⋅       6.53804  9.86898  3.39163
  ⋅        ⋅       4.29656  4.53397
  ⋅        ⋅        ⋅       1.50247

Here we validate the factorization:

opnorm(R' * R - B) / opnorm(B)
5.3773436509106825e-17