Except for some unimportant technicalities, the eigenvectors of A∗A, when appropriately ordered and normalized, are right singular vectors of A. The left singular vectors could then be deduced from the identity AV=US.
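For a small test matrix, this construction can be verified directly. The following sketch (using Julia's standard LinearAlgebra library and an arbitrary full-column-rank matrix chosen only for illustration) builds singular values and vectors from an EVD of $A^*A$ and checks them against the built-in SVD.

```julia
using LinearAlgebra

A = [1.0 2; 3 4; 5 6]                # arbitrary real matrix with full column rank
λ, V = eigen(Symmetric(A' * A))      # eigenvalues are returned in ascending order
idx = sortperm(λ, rev=true)          # reorder so the singular values descend
σ = sqrt.(λ[idx])                    # singular values are square roots of the eigenvalues
V = V[:, idx]                        # columns are right singular vectors
U = (A * V) ./ σ'                    # AV = US  ⇒  u_j = A*v_j / σ_j

@show σ ≈ svdvals(A)                 # compare with the built-in computation
```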
Another close connection between EVD and SVD comes via the (m+n)×(m+n) matrix

$$C = \begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix}.$$

If $\sigma$ is a singular value of $A$, then $\sigma$ and $-\sigma$ are eigenvalues of $C$, and the associated eigenvector immediately reveals a left and a right singular vector (see Exercise 11). This connection is implicitly exploited by software to compute the SVD.
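As a quick numerical check (not the algorithm production software actually uses), one can form this block matrix for a small random $A$ and compare its eigenvalues with $\pm\sigma$; the matrix sizes below are arbitrary.

```julia
using LinearAlgebra

A = randn(4, 2)                          # m = 4, n = 2
C = [zeros(2, 2) A'; A zeros(4, 4)]      # the (m+n)×(m+n) block matrix
@show svdvals(A)                         # σ₁ ≥ σ₂
@show eigvals(Symmetric(C))              # contains ±σ₁ and ±σ₂, plus m-n zeros
```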
In words, each right singular vector is mapped by A to a scaled version of its corresponding left singular vector; the scaling factor is the corresponding singular value.
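Equating the $j$th columns on the two sides of $AV=US$ makes this explicit:

$$A v_j = \sigma_j u_j, \qquad j = 1,\ldots,\min(m,n).$$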
Both the SVD and the EVD describe a matrix in terms of some special vectors and a small number of scalars. Table 7.3.1 summarizes the key differences. The SVD sacrifices having the same basis in both source and image spaces—after all, they may not even have the same dimension—but as a result gains orthogonality in both spaces.
When $m>n$, the last $m-n$ columns of $U$ are multiplied only by zeros in $S$, and dropping them gives the thin SVD

$$A = \hat{U}\hat{S}V^*,$$

in which $\hat{S}$ is square and diagonal and $\hat{U}$ is ONC but not square.
The thin form retains all the information about A from the SVD; the factorization is still an equality, not an approximation. It is computationally preferable when m≫n, since it requires far less storage than a full SVD. For a matrix with more columns than rows, one can derive a thin form by taking the adjoint of the thin SVD of A∗.
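In Julia, for example, `svd` returns the thin factorization of a tall matrix by default, and the full version must be requested explicitly. A brief sketch with an arbitrary random matrix:

```julia
using LinearAlgebra

A = randn(1000, 10)
thin = svd(A)                          # default full=false gives the thin form
full = svd(A, full=true)

@show size(thin.U) size(full.U)        # (1000, 10) versus (1000, 1000)
@show thin.U * Diagonal(thin.S) * thin.Vt ≈ A   # still an exact factorization (to roundoff)
```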
The SVD is intimately connected to the 2-norm, as the following theorem describes.
The conclusion svdnorm can be proved by vector calculus. In the square case m=n, A having full rank is equivalent to A being invertible. The SVD is the usual means for computing the 2-norm and condition number of a matrix.
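For instance, the following identities hold up to roundoff (a sketch with a random matrix; `opnorm`, `cond`, and `svdvals` are from Julia's LinearAlgebra):

```julia
using LinearAlgebra

A = randn(6, 4)
σ = svdvals(A)                         # singular values, in descending order

@show opnorm(A, 2) ≈ σ[1]              # the 2-norm equals the largest singular value
@show cond(A) ≈ σ[1] / σ[end]          # condition number is σ₁/σᵣ when A has full rank
```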
✍ Each factorization below is algebraically correct. The notation $I_n$ means an $n\times n$ identity matrix. In each case, determine whether it is an SVD. If it is, write down $\sigma_1$, $u_1$, and $v_1$. If it is not, state all of the ways in which it fails the required properties.
✍ Apply Theorem 7.3.2 to find an SVD of $A=\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & -1 & 0 & -1 \end{bmatrix}$.
⌨ Let $x$ be a vector of 1000 equally spaced points between 0 and 1. Suppose $A_n$ is the $1000\times n$ matrix whose $(i,j)$ entry is $x_i^{j-1}$, for $j=1,\ldots,n$.
(a) Print out the singular values of $A_1$, $A_2$, and $A_3$.
(b) Make a log-linear plot of the singular values of $A_{40}$.
(c) Repeat part (b) after converting the elements of x to type Float32 (i.e., single precision).
(d) Having seen the plot for part (c), which singular values in part (b) do you suspect may be incorrect?
⌨ See Demo 7.1.5 for how to get the “mandrill” test image. Make a log-linear scatter plot of the singular values of the matrix of grayscale intensity values. (The shape of this graph is surprisingly similar across a wide range of images.)
✍ Prove that for a square real matrix $A$, $\|A\|_2 = \|A^T\|_2$.
✍ Prove the conclusion svdcond of Theorem 7.3.3, given that svdnorm is true. (Hint: If the SVD of $A$ is known, what is the SVD of $A^+$?)
✍ Suppose $A\in\mathbb{R}^{m\times n}$, with $m>n$, has the thin SVD $A=\hat{U}\hat{S}V^T$. Show that the matrix $AA^+$ is equal to $\hat{U}\hat{U}^T$. (You must be careful with matrix sizes in this derivation.)
✍ In (3.2.6) we defined the 2-norm condition number of a rectangular matrix as $\kappa(A)=\|A\|\cdot\|A^+\|$, and then claimed (in the real case) that $\kappa(A^*A)=\kappa(A)^2$. Prove this assertion using the SVD.
✍ Show that the square of each singular value of A is an eigenvalue of the matrix AA∗ for any m×n matrix A. (You should consider the cases m>n and m≤n separately.)
✍ In this problem you will see how svdnorm is proved in the real case.
(a) Use the technique of Lagrange multipliers to show that among vectors that satisfy $\|x\|_2^2=1$, any vector that maximizes $\|Ax\|_2^2$ must be an eigenvector of $A^TA$. It will help to know that if $B$ is any symmetric matrix, the gradient of the scalar function $x^TBx$ with respect to $x$ is $2Bx$.
(b) Use the result of part (a) to prove svdnorm for real matrices.