As in the rest of this chapter, we will be using the 2-norm exclusively.
Equation (3.3.2) is the key to the computational attractiveness of orthogonality. Figure 3.3.1 shows how nonorthogonal vectors can allow a multidimensional version of subtractive cancellation, in which $\|x-y\|$ is much smaller than $\|x\|$ and $\|y\|$. As the figure illustrates, orthogonal vectors do not allow this phenomenon. By (3.3.2), the magnitude of the difference or sum of two orthogonal vectors is no smaller than the magnitude of either original vector.
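A quick numerical illustration of the cancellation effect (a sketch, not part of the text; the vectors are made up for demonstration):

```julia
using LinearAlgebra

# Nearly parallel vectors: their difference is tiny relative to the originals.
x = [1.0, 1.0]
y = [1.0, 1.0 + 1e-12]
@show norm(x - y) / norm(x)    # on the order of 1e-12: severe cancellation

# Orthogonal vectors: the difference is no smaller than either vector.
u = [1.0, 0.0]
v = [0.0, 1.0]
@show norm(u - v) / norm(u)    # sqrt(2): no cancellation
```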
Statements about orthogonal vectors are often made more easily in matrix form. Let $Q$ be an $n\times k$ matrix whose columns $q_1,\dots,q_k$ are orthogonal vectors. The orthogonality conditions (3.3.1) become simply that $Q^TQ$ is a diagonal matrix, since the $(i,j)$ entry of $Q^TQ$ is $q_i^Tq_j$, which is zero whenever $i\neq j$.
If the columns of $Q$ are orthonormal, then $Q^TQ$ is the $k\times k$ identity matrix. This is such an important property that we will break with common practice here and give this type of matrix a name.
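For instance, here is a quick check in Julia (a sketch with hand-picked vectors, not taken from the text):

```julia
using LinearAlgebra

# Two explicitly orthonormal columns in R^3.
q1 = [1.0, 2.0, 2.0] / 3
q2 = [2.0, 1.0, -2.0] / 3
Q = [q1 q2]        # 3×2 matrix with orthonormal columns

Q' * Q             # equals the 2×2 identity matrix, up to roundoff
```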
Now we come to another important way to factor a matrix: the QR factorization. As we will show below, the QR factorization plays a role in linear least squares analogous to the role of LU factorization in linear systems.
In most introductory books on linear algebra, the QR factorization is derived through a process known as Gram–Schmidt orthogonalization. However, while it is an important tool for theoretical work, the Gram–Schmidt process is numerically unstable. We will consider an alternative construction in Computing QR factorizations.
When $m$ is much larger than $n$, which is often the case, there is a compressed form of the factorization that is more efficient. In the product $A=QR$, the last $m-n$ columns of $Q$ are multiplied only by the rows of zeros at the bottom of $R$. Deleting them leaves the thin QR factorization $A=\hat{Q}\hat{R}$, in which $\hat{Q}$ is $m\times n$ with orthonormal columns (the first $n$ columns of $Q$) and $\hat{R}$ is the $n\times n$ upper triangular top block of $R$.
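In Julia the thin factors can be pulled out of the standard-library `qr` (a sketch using a random matrix; because `F.Q` is stored in a compact form, we multiply it into a rectangular identity to materialize the thin factor):

```julia
using LinearAlgebra

A = rand(6, 3)                         # m = 6, n = 3
F = qr(A)                              # factorization object from LinearAlgebra

# Extract the thin factors: Qhat is 6×3 with orthonormal columns, Rhat is 3×3.
Qhat = F.Q * Matrix{Float64}(I, 6, 3)  # first 3 columns of the full Q
Rhat = F.R                             # upper triangular

@show opnorm(Qhat' * Qhat - I)         # ≈ 0: columns of Qhat are orthonormal
@show opnorm(Qhat * Rhat - A)          # ≈ 0: A is recovered from the thin factors
```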
Substituting $A=\hat{Q}\hat{R}$ into the normal equations $A^TAx=A^Tb$ and using $\hat{Q}^T\hat{Q}=I$ gives $\hat{R}^T\hat{R}x=\hat{R}^T\hat{Q}^Tb$. In order for the normal equations to be well posed, we require that $A$ is not rank-deficient (as proved in Theorem 3.2.2). This is enough to guarantee that $\hat{R}$ is nonsingular (see Exercise 4). Therefore its transpose is nonsingular as well, and we arrive at

$$
\hat{R} x = \hat{Q}^T b.
$$
This algorithm is implemented in Function 3.3.2. It is essentially the algorithm used internally by Julia when `A\b` is called. The execution time is dominated by the factorization; the most common method for computing it is described in Computing QR factorizations.
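A minimal sketch of the idea in Julia (this is an illustration only, not the book's Function 3.3.2; the function name `lsqr_sketch` is made up, and it leans on the built-in `qr` rather than a hand-written triangular solver):

```julia
using LinearAlgebra

"""
Solve the least-squares problem min ‖b - A x‖ via the thin QR factorization.
Assumes A is m×n with m ≥ n and full column rank.
"""
function lsqr_sketch(A, b)
    F = qr(A)              # QR factorization of A
    c = F.Q' * b           # length-m vector; only its first n entries matter
    n = size(A, 2)
    x = F.R \ c[1:n]       # solve with the n×n upper triangular factor
    return x
end

# Usage: fit an overdetermined system in the least-squares sense.
A = rand(20, 4); b = rand(20)
x = lsqr_sketch(A, b)
@show norm(A * x - b) ≈ norm(A * (A \ b) - b)   # residual matches Julia's backslash
```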
The solution of least-squares problems via QR factorization is more stable than when the normal equations are formulated and solved directly.
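A small experiment (a sketch, not from the text) makes the gap visible: with an ill-conditioned matrix, forming the normal equations squares the condition number, while the QR route works with $A$ itself. The exact numbers vary by machine.

```julia
using LinearAlgebra

# An ill-conditioned fitting problem: monomial basis of degree 10 on [0, 1].
t = range(0, 1, length=400)
A = [tk^j for tk in t, j in 0:10]           # 400×11 Vandermonde-type matrix
x_exact = ones(11)
b = A * x_exact

x_qr = A \ b                                # backslash uses QR for rectangular A
x_ne = (A' * A) \ (A' * b)                  # normal equations

@show cond(A)                               # large
@show norm(x_qr - x_exact) / norm(x_exact)  # roughly cond(A) * eps
@show norm(x_ne - x_exact) / norm(x_exact)  # typically far larger (cond(A)^2 effect)
```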
✍ Prove Theorem 3.3.2. For the third part, use the definition of the 2-norm as an induced matrix norm, then apply some of our other results as needed.
⌨ Let $t_0,\dots,t_m$ be $m+1$ equally spaced points in $[-1,1]$. Let $A$ be the matrix in (3.1.2) for $m=400$ and fitting by polynomials of degree less than 5. Find the thin QR factorization of $A$, and, on a single graph, plot every column of $\hat{Q}$ as a function of the vector $t$.
✍ Prove that if the $m\times n$ ($m\ge n$) matrix $A$ is not rank-deficient, then the factor $\hat{R}$ of the thin QR factorization is invertible. (Hint: Suppose on the contrary that $\hat{R}$ is singular. Show using the factored form of $A$ that this would imply that $A$ is rank-deficient.)
✍ Let $A$ be $m\times n$ with $m>n$. Show that if $A=QR$ is a QR factorization and $R$ has rank $n$, then $A^+=R^+Q^T$.
✍ Let $A$ be $m\times n$ with $m>n$. Show that if $A=\hat{Q}\hat{R}$ is a thin QR factorization and $\hat{R}$ is invertible, then $A^+=\hat{R}^{-1}\hat{Q}^T$.
⌨ Repeat Exercise 3.1.2, but use thin QR factorization rather than the backslash to solve the least-squares problem.
✍ The matrix $P=\hat{Q}\hat{Q}^T$ derived from the thin QR factorization has some interesting and important properties.
(a) Prove that $P=AA^+$.
(b) Prove that $P^2=P$. (This property defines a projection matrix.)
(c) Any vector $x$ may be written trivially as $x=u+v$, where $u=Px$ and $v=(I-P)x$. Prove that $u$ and $v$ are orthogonal. (Together with part (b), this means that $P$ is an orthogonal projector.)
Confusingly, a square matrix whose columns are orthogonal is not necessarily an orthogonal matrix; the columns must be orthonormal, which is a stricter condition. For instance, the columns of $2I$ are orthogonal, but $(2I)^T(2I)=4I$, so $2I$ is not an orthogonal matrix.