An important conclusion about MINRES and CG (and, by extension, GMRES) is that the convergence of a Krylov method can be expected to deteriorate as the condition number of the matrix increases. Even moderately large condition numbers can make convergence impractically slow. It is therefore common to pair these methods with a technique for reducing the relevant condition number.
More specifically, we replace the original system $Ax=b$ by the mathematically equivalent system

$$
M^{-1}Ax = M^{-1}b, \qquad (8.8.1)
$$

where the nonsingular matrix $M$ is called a preconditioner. The transformation in (8.8.1) is known as left preconditioning; it is the simplest and most common type.
As usual, we do not want to actually compute $M^{-1}$ for a given $M$. Instead, we note that the system (8.8.1) has coefficient matrix $M^{-1}A$, so in a Krylov method the operation "let $v=M^{-1}Au$" becomes a two-step process:

1. Set $y=Au$.
2. Solve $Mv=y$ for $v$.
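Here is a minimal sketch of that two-step product in Julia, under the assumption that $M$ is available as an explicit sparse matrix and that a sparse LU factorization of $M$ (computed once, before the iteration begins) carries out step 2. The test matrix and the particular choice of $M$ are purely illustrative.

```julia
using LinearAlgebra, SparseArrays

n = 1000
A = 4I + sprand(n, n, 0.002)     # an illustrative sparse test matrix
M = tril(A)                      # some approximation of A; here, its lower triangle
fact = lu(M)                     # factor M once, reuse at every Krylov iteration
u = rand(n)

y = A * u                        # step 1: y = A*u
v = fact \ y                     # step 2: solve M*v = y for v
```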
As an implementation detail, it is common to provide the Krylov solver with code that does step 2; if the matrix M is given, the default is to use sparse factorization.
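As a hedged example of what this looks like in practice, the sketch below assumes the gmres of IterativeSolvers.jl (presumably the function behind the gmres calls in the exercises below), which accepts a left preconditioner through its Pl keyword; Pl only needs to support ldiv! (that is, step 2), which a Diagonal matrix or a factorization object already does. The matrix, right-hand side, and preconditioner here are placeholders.

```julia
using IterativeSolvers, LinearAlgebra, SparseArrays

n = 2000
A = 4I + sprand(n, n, 0.002)
b = ones(n)

M = Diagonal(A)                  # an explicitly given M (here, just the diagonal of A)
x = gmres(A, b; Pl = M)          # the solver performs step 2 via ldiv! with M
```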
There are competing objectives in the choice of $M$. On one hand, we want $M^{-1}A\approx I$ in some sense because that makes (8.8.1) easy to solve by Krylov iteration. Hence $M\approx A$. On the other hand, we desire that solving the system $Mv=y$ be relatively fast.
One of the simplest choices for the preconditioner $M$ is a diagonal matrix. This definitely meets the requirement of being fast to invert: the solution of $Mv=y$ is just $v_i = y_i / M_{ii}$. The only question is whether it can be chosen in such a way that $M^{-1}A$ is much more amenable to Krylov iterations than $A$ is. This may be the case when the rows of $A$ differ greatly in scale, or when $A$ is diagonally dominant (see (2.9.1)).
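A small, hypothetical illustration of the row-scaling case: when the rows of $A$ differ in magnitude by many orders, taking $M$ to be the diagonal of $A$ (Jacobi preconditioning) can remove most of that imbalance. The sizes and scales below are arbitrary.

```julia
using LinearAlgebra

n = 200
scales = exp10.(range(-6, 6, length=n))   # row scales spanning 12 orders of magnitude
A = scales .* randn(n, n)                 # row i of A is multiplied by scales[i]
M = Diagonal(A)                           # M_ii = A_ii

@show cond(A)        # typically enormous because of the row scaling
@show cond(M \ A)    # typically orders of magnitude smaller
```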
Another general-purpose technique is the incomplete LU factorization. Since a true LU factorization of a sparse matrix usually leads to an undesirable amount of fill-in, incomplete LU sacrifices the exactness of the factors, dropping elements smaller than an adjustable threshold in order to preserve sparsity.
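The sketch below gives a rough sense of the tradeoff, assuming the ilu function from IncompleteLU.jl (presumably the package behind the ilu calls in the exercises below): a complete sparse LU suffers fill-in, while the incomplete factors remain nearly as sparse as $A$ itself. The test matrix is arbitrary.

```julia
using LinearAlgebra, SparseArrays, IncompleteLU

A = I + sprand(2000, 2000, 0.002)
complete = lu(A)                 # exact sparse LU (fill-in occurs)
approx = ilu(A, τ = 0.1)         # entries below the threshold τ are dropped

@show nnz(A)
@show nnz(complete.L) + nnz(complete.U)
@show nnz(approx.L) + nnz(approx.U)
```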
In practice, good preconditioning is often as important as, if not more important than, the specific choice of Krylov method. Effective preconditioning may require a deep understanding of the underlying application, however, which limits how far we can pursue the subject here. For instance, the linear system may be an approximation of a continuous mathematical model, in which case $M$ can be derived from a cruder form of that approximation. Krylov methods offer a natural way to exploit these and other approximate inverses.
✍ Suppose $M=R^TR$. Show that the eigenvalues of $R^{-T}AR^{-1}$ are the same as the eigenvalues of $M^{-1}A$. (This observation underlies preconditioning variants for SPD matrices.)
⌨ The object returned by ilu stores the factors in a way that optimizes sparse triangular substitution. You can recover the factors themselves via
```julia
iLU = ilu(A, τ=0.1)    # for example
L, U = I + iLU.L, iLU.U'
```
In this problem, use A = 1.5I + sprand(800,800,0.005).
(a) Using τ=0.3 for the factorization, plot the eigenvalues of $A$ and of $M^{-1}A$ in the complex plane on side-by-side subplots. Do they support the notion that $M^{-1}A$ is “more like” an identity matrix than $A$ is? (Hint: the matrices are small enough to convert to standard dense form for the use of eigvals.)
(b) Repeat part (a) with τ=0.03. Is $M$ a more accurate approximation to $A$ than in part (a), or less accurate?
⌨ (Continuation of Exercise 8.5.5.) Let B be diagm(1:100), let I be I(100), and let Z be a 100×100 matrix of zeros. Define
and let b be a 200-vector of ones. The matrix A is difficult for GMRES.
(a) Design a diagonal preconditioner $M$, with all diagonal elements equal to 1 or -1, such that $M^{-1}A$ has all positive eigenvalues. Apply gmres without restarts using this preconditioner and a tolerance of $10^{-10}$ for 100 iterations. Plot the convergence curve.
(b) Now design another diagonal preconditioner such that all the eigenvalues of $M^{-1}A$ are 1, and apply preconditioned gmres again. How many iterations are apparently needed for convergence?
⌨ Let A = matrixdepot("Bai/rdb2048"), and let b be a vector of 2048 ones. In the steps below, use GMRES for up to 300 iterations without restarts and with a stopping tolerance of $10^{-4}$.
(a) Time the GMRES solution without preconditioning. Verify that convergence was achieved.
(b) Show that diagonal preconditioning is not helpful for this problem.
(c) To two digits, find a value of τ for the incomplete LU factorization at which the preconditioned method transitions from being effective (and faster than in part (a)) to being ineffective.