A primary reason for our interest in matrices is their relationship to linear transformations. If we define f(x) = Ax, then for all vectors x, y, and scalars α,

$$
f(x+y) = f(x) + f(y), \qquad f(\alpha x) = \alpha f(x).
$$
These properties define a linear transformation. Moreover, every linear transformation between finite-dimensional vector spaces can be represented as a matrix-vector multiplication.
In Chapter 4 we solved the nonlinear rootfinding problem f(x)=0 with methods that needed only the ability to evaluate f at any known value of x. By repeatedly evaluating f at cleverly chosen points, these algorithms were able to return an estimate for f⁻¹(0).
A close examination reveals that the power method and Krylov subspace methods have the same structure: the only appearance of the matrix A in them is to multiply a known vector, i.e., to evaluate f(x)=Ax. In the Krylov solvers, repeated evaluations of f are used to estimate f⁻¹(b), that is, to evaluate A⁻¹b.
Bringing these points of view together leads us to a cornerstone of modern scientific computation: matrix-free iterations. Krylov subspace methods can be used to invert a linear transformation if one provides code for the transformation, even if its associated matrix is not known explicitly.
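As a concrete sketch of the idea (the operator, names, and package choices here are ours, for illustration only), we can hand a Krylov solver nothing but a function:

```julia
using LinearMaps, IterativeSolvers

# A linear transformation coded as a function, with no matrix in sight.
# (Hypothetical example: y = 3x plus a downward shift of x, which is a
# nonsingular lower-bidiagonal action.)
n = 100
function f(x)
    y = 3 .* x
    y[2:end] .+= x[1:end-1]
    return y
end

A = LinearMap(f, n)        # wrap f as an n×n linear operator
b = rand(n)
x = gmres(A, b)            # Krylov solve using only evaluations of f
```

The solver never sees a matrix; it only applies f to vectors of its own choosing.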
In From matrix to insight we saw that a grayscale image can be represented as an m×n matrix X of pixel intensity values. Now consider a simple model for blurring the image. Define B as the m×m tridiagonal matrix

$$
\mathbf{B} = \begin{bmatrix}
\tfrac{1}{2} & \tfrac{1}{4} & & & \\
\tfrac{1}{4} & \tfrac{1}{2} & \tfrac{1}{4} & & \\
& \ddots & \ddots & \ddots & \\
& & \tfrac{1}{4} & \tfrac{1}{2} & \tfrac{1}{4} \\
& & & \tfrac{1}{4} & \tfrac{1}{2}
\end{bmatrix}. \tag{8.7.2}
$$
The product BX applies B to each column of X. Within each column, it replaces every pixel value by a weighted average of that pixel and its two vertical neighbors. That has the effect of blurring the image vertically. We can increase the amount of blur by applying B repeatedly.
In order to blur horizontally, we can transpose the image and apply blurring in the same way. We need a blurring matrix defined as in (8.7.2) but with size n×n; we call this matrix C. Altogether the horizontal blurring is done by transposing, applying C, and transposing back to the original orientation. That is,

$$
\bigl(\mathbf{C}\mathbf{X}^T\bigr)^T = \mathbf{X}\mathbf{C}^T = \mathbf{X}\mathbf{C},
$$

where the last equality holds because C is symmetric. Applying k repetitions of the blur in both directions gives the complete blur operation,

$$
\operatorname{blur}(\mathbf{X}) = \mathbf{B}^k \mathbf{X} \mathbf{C}^k. \tag{8.7.4}
$$
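A short sketch of this construction in Julia (the helper name blurmatrix and the dimensions are our own choices):

```julia
using SparseArrays

# Tridiagonal blurring matrix of a given size, as in (8.7.2)
blurmatrix(sz) = spdiagm(-1 => fill(0.25, sz - 1),
                          0 => fill(0.5, sz),
                          1 => fill(0.25, sz - 1))

m, n = 300, 400                 # illustrative image dimensions
B, C = blurmatrix(m), blurmatrix(n)

k = 12                          # number of repetitions of the blur
blur(X) = B^k * X * C^k         # the complete blur operation (8.7.4)
```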
A more interesting operation is deblurring: given an image blurred by poor focus, can we reconstruct the true image? Conceptually, we want to invert the function blur(X).
It’s easy to see from (8.7.4) that the blur operation is a linear transformation on image matrices. But an m×n image matrix is equivalent to a length-mn vector; it’s just a matter of interpreting the shape of the same data. Let vec(X)=x and unvec(x)=X be the mathematical statements of such reshaping operations. Now say X is the original image and Z=blur(X) is the blurred one. Then by linearity there is some matrix A such that

$$
z = A x, \qquad \text{where } x = \operatorname{vec}(X), \; z = \operatorname{vec}(Z).
$$
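In Julia, for instance, vec and reshape play exactly these roles (a small illustration; variable names are ours):

```julia
X = rand(300, 400)         # an m×n image matrix
x = vec(X)                 # vec: stack the columns into a length-mn vector
X2 = reshape(x, 300, 400)  # unvec: reinterpret the same data as a matrix
X2 == X                    # true; no entries were copied or changed
```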
The matrix A is mn×mn; for a 12-megapixel image, it would have 1.4×10¹⁴ entries! Admittedly, it is extremely sparse, but the point is that we don’t need it at all.
Instead, given any vector u we can compute v = Au through the steps

$$
X = \operatorname{unvec}(u), \qquad Z = \operatorname{blur}(X), \qquad v = \operatorname{vec}(Z).
$$
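Here is a minimal matrix-free deblurring sketch built on those three steps. It is our own code, reusing B, C, k, m, and n from the blur sketch above; the tooling in the text's Demo 8.7.2 may differ.

```julia
using LinearMaps, IterativeSolvers

Bk, Ck = B^k, C^k          # precompute the repeated blurs

# Evaluate v = A*u without ever forming the mn×mn matrix A
function blurvec(u)
    X = reshape(u, m, n)   # unvec
    Z = Bk * X * Ck        # blur
    return vec(Z)          # vec
end

A = LinearMap(blurvec, m * n; issymmetric=true)

X = rand(m, n)             # stand-in for a true image
z = blurvec(vec(X))        # its blurred version, as a vector
x = gmres(A, z)            # matrix-free Krylov estimate of vec(X)
X_est = reshape(x, m, n)   # the deblurred image
```

GMRES interrogates the transformation only through calls to blurvec; the giant matrix A never exists in memory.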
✍ Suppose that code for the linear transformation f(x)=Ax is given for an unknown matrix A. Explain carefully how one could construct A.
⌨ The matrix of the blur transformation happens to be symmetric and positive definite. Repeat Demo 8.7.2 using CG for the deblurring.
The condition number of the matrix of the blur transformation is related to the condition numbers of the single-dimension matrices Bᵏ and Cᵏ in (8.7.4).
(a) ⌨ Let m=50. Show that B has a Cholesky factorization and thus is SPD. Find κ(B). (Note: cond requires a regular dense matrix, not a sparse matrix.)
(b) ✍ Explain why part (a) implies κ(Bᵏ) = κ(B)ᵏ.
(c) ✍ Explain two important effects of the limit k→∞ on deblurring by Krylov methods.
The cumulative summation function cumsum is defined as

$$
z = \operatorname{cumsum}(x) \quad \Longleftrightarrow \quad z_k = \sum_{i=1}^{k} x_i.
$$

For example, the cumulative sum of [1, 2, 3] is [1, 3, 6].