Condition number bounds on error

using FundamentalsNumericalComputation

Julia's `cond` function computes matrix condition numbers; by default, the 2-norm is used. As an example, the family of Hilbert matrices is famously badly conditioned. Here is the \(7\times 7\) case, with entries \(A_{ij}=1/(i+j)\).

A = [ 1/(i+j) for i=1:7, j=1:7 ]
7×7 Array{Float64,2}:
 0.5       0.333333  0.25      0.2        0.166667   0.142857   0.125
 0.333333  0.25      0.2       0.166667   0.142857   0.125      0.111111
 0.25      0.2       0.166667  0.142857   0.125      0.111111   0.1
 0.2       0.166667  0.142857  0.125      0.111111   0.1        0.0909091
 0.166667  0.142857  0.125     0.111111   0.1        0.0909091  0.0833333
 0.142857  0.125     0.111111  0.1        0.0909091  0.0833333  0.0769231
 0.125     0.111111  0.1       0.0909091  0.0833333  0.0769231  0.0714286
kappa = cond(A)
1.6978363217187743e9
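In the 2-norm, \(\kappa(A)\) equals the ratio of the largest to the smallest singular value of \(A\). As a quick check (not part of the original demo), we can confirm that `cond` agrees with that definition:

```julia
using LinearAlgebra

A = [ 1/(i+j) for i=1:7, j=1:7 ]
sigma = svdvals(A)                        # singular values, in descending order
kappa_svd = maximum(sigma) / minimum(sigma)
# In the 2-norm, cond(A) is exactly this singular-value ratio.
```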

Next we engineer a linear system whose exact solution we know.

x_exact = 1.:7.;
b = A*x_exact
7-element Array{Float64,1}:
 5.282142857142857
 4.342063492063492
 3.7130952380952382
 3.2538239538239537
 2.900613275613275
 2.619197469197469
 2.389063714063714

Now we perturb the data randomly with a vector of norm \(10^{-12}\).

dA = randn(size(A));  dA = 1e-12*(dA/norm(dA));
db = randn(size(b));  db = 1e-12*(db/norm(db));
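The perturbation bounds used below depend on the *relative* sizes of these perturbations. Since \(\|dA\|\) and \(\|db\|\) were both scaled to \(10^{-12}\), those relative sizes are easy to compute (a self-contained sketch, repeating the setup above):

```julia
using LinearAlgebra

A = [ 1/(i+j) for i=1:7, j=1:7 ]
b = A*(1.:7.)
dA = randn(size(A));  dA = 1e-12*(dA/norm(dA))
db = randn(size(b));  db = 1e-12*(db/norm(db))
# Relative perturbation sizes that enter the condition-number bounds:
rel_dA = norm(dA) / norm(A)
rel_db = norm(db) / norm(b)
```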

We solve the perturbed problem using built-in pivoted LU and see how much the solution changes.

x = (A+dA) \ (b+db); 
dx = x - x_exact;

Here is the relative error in the solution.

rel_error = norm(dx) / norm(x_exact)
1.5190931801953373e-5

And here are upper bounds predicted using the condition number of the original matrix.

@show b_bound = kappa * 1e-12/norm(b);
@show A_bound = kappa * 1e-12/norm(A);
b_bound = (kappa * 1.0e-12) / norm(b) = 0.00017690558703123912
A_bound = (kappa * 1.0e-12) / norm(A) = 0.0014425077292288535

Even if we don’t perturb the data manually, rounding to machine precision perturbs it for us when we solve the linear system numerically.

x = A\b;
@show rel_error = norm(x - x_exact) / norm(x_exact);
@show rounding_bound = kappa*eps();
rel_error = norm(x - x_exact) / norm(x_exact) = 3.6591498138119105e-8
rounding_bound = kappa * eps() = 3.769953952834136e-7

Because \(\kappa\approx 10^9\), it’s possible to lose up to 9 digits of accuracy in the process of passing from \(A\) and \(b\) to \(x\). That’s independent of the algorithm; it’s inevitable once the data are expressed in double precision.
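The digit count in the rule of thumb above is just \(\log_{10}\kappa\); a quick computation for this matrix:

```julia
using LinearAlgebra

A = [ 1/(i+j) for i=1:7, j=1:7 ]
kappa = cond(A)
# log10 of the condition number estimates the digits of accuracy at risk.
digits_at_risk = log10(kappa)
```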

Larger Hilbert matrices are even more poorly conditioned.

A = [ 1/(i+j) for i=1:14, j=1:14 ];
kappa = cond(A)
2.9943121605195904e17

Before we compute the solution, note that \(\kappa\) exceeds 1/eps(). In principle we therefore might end up with an answer that is completely wrong (i.e., a relative error greater than 100%).

rounding_bound = kappa*eps()
66.48708607047894
x_exact = 1.:14.;
b = A*x_exact;  
x = A\b;

We still get an answer, but its relative error exceeds 100%.

relative_error = norm(x_exact - x) / norm(x_exact)
10.847226987239631
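This failure is a property of double precision, not of the matrix itself. As a sketch of that point (not part of the original demo), solving the same system in exact rational arithmetic with `Rational{BigInt}` entries recovers the exact solution, because no rounding ever occurs:

```julia
using LinearAlgebra

# Same 14×14 matrix, but with exact rational entries instead of Float64.
A = [ 1//big(i+j) for i=1:14, j=1:14 ]
x_exact = Rational{BigInt}.(1:14)
b = A*x_exact
x = A\b    # generic LU factorization over the rationals -- no rounding error
```

The price is cost: the denominators grow rapidly, so exact arithmetic is far too slow for large systems, which is why floating point (and hence conditioning) matters in practice.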