Interpolation is not the only way to use polynomials for global approximation of functions. In Fitting functions to data we saw how to find least-squares polynomial fits to data by solving linear least-squares matrix problems. This idea can be extended to fitting functions.
We can extend least-squares fitting from data to functions by generalizing several familiar finite-dimensional definitions. The continuous analog of a sum is an integral, which leads to the following.
If we are extending our notion of vectors to include continuous functions, what should serve as an extension of a matrix? One of our most important interpretations of a matrix is the connection to linear combinations. For instance, the Vandermonde-like system Vc≈y from (3.1.2) is the statement
which we want to abbreviate in “matrix”-vector form.
We consider any other expressions involving a quasimatrix to be undefined. It might help to think of $F$ as an $\infty\times n$ matrix, which is consistent with the definitions that $Fz$ is a function ($\infty\times 1$), $F^T g$ is a vector ($n\times 1$), and $F^T F$ is a matrix ($n\times n$). When infinite dimensions combine in a product, we use integrals rather than sums.
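To make these rules concrete, here is a minimal sketch (our own illustration in Python, not the text's code; the helper names `Fz`, `FTg`, and `FTF` are ours) that realizes the three quasimatrix products for the monomial columns $1, x, x^2, x^3$ on $[-1,1]$, using numerical integration wherever an infinite dimension is involved:

```python
import numpy as np
from scipy.integrate import quad

# Columns of the quasimatrix F: the monomials 1, x, x^2, x^3 on [-1, 1].
n = 4
cols = [lambda x, j=j: x**j for j in range(n)]

def Fz(z):
    """F z: a linear combination of the columns, i.e., a function of x."""
    return lambda x: sum(zj * fj(x) for zj, fj in zip(z, cols))

def FTg(g):
    """F^T g: the n-vector of inner products <f_j, g>, each an integral over [-1, 1]."""
    return np.array([quad(lambda x: fj(x) * g(x), -1, 1)[0] for fj in cols])

def FTF():
    """F^T F: the n-by-n Gram matrix of inner products of the columns."""
    return np.array([[quad(lambda x: fi(x) * fj(x), -1, 1)[0] for fj in cols]
                     for fi in cols])

print(FTF())                   # symmetric 4x4 matrix; note the nonzero off-diagonals
print(FTg(np.exp))             # F^T applied to the function e^x
print(Fz([1, 0, 2, 0])(0.5))   # the function 1 + 2x^2 evaluated at x = 0.5
```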
The discrete linear least-squares problem of minimizing $\|y - Vc\|_2$ over all possible $c$, given the matrix $V$ and data vector $y$, has a solution via the normal equations (3.2.3),
We can now reinterpret (9.4.11) in terms of quasimatrices.
There is no need to supply a proof of Theorem 9.4.1 because it will read exactly the same as for the discrete normal equations. All the effort has gone into making definitions that set up a perfect analogy. In retrospect, all we needed in the original discrete case were linear combinations and inner products.
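To see the analogy in action, here is a minimal sketch (ours, not the text's code; the choice of $f$ and the variable names are for illustration only) that forms and solves the continuous normal equations $(F^T F)\,c = F^T f$ for a cubic monomial basis and $f(x) = e^x$ on $[-1,1]$, computing every inner product by numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad

n = 3
f = np.exp

# Gram matrix G = F^T F and right-hand side b = F^T f, entry by entry,
# with each inner product evaluated by numerical integration over [-1, 1].
G = np.array([[quad(lambda x: x**(i + j), -1, 1)[0] for j in range(n + 1)]
              for i in range(n + 1)])
b = np.array([quad(lambda x: x**i * f(x), -1, 1)[0] for i in range(n + 1)])

c = np.linalg.solve(G, b)            # normal equations: (F^T F) c = F^T f
p = np.polynomial.Polynomial(c)      # p(x) = c[0] + c[1] x + ... + c[n] x^n

xs = np.linspace(-1, 1, 5)
print(np.vstack([f(xs), p(xs)]))     # compare f and its cubic least-squares fit
```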
Equation (9.4.13) becomes much simpler if $V^T V$ is diagonal. By our definitions, this would imply that the columns of $V$ are mutually orthogonal in the sense of the function inner product. This is not the case for the monomial functions $x^j$. But there are orthogonal polynomials that do have this property.
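For instance, with the function inner product over $[-1,1]$ used above, two of the monomial columns already fail to be orthogonal:

$$
\langle 1, x^2 \rangle = \int_{-1}^{1} x^2 \, dx = \frac{2}{3} \neq 0.
$$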
For what follows, let $\mathcal{P}_n \subset S$ be the set of polynomials of degree $n$ or less.
Here are some key facts that are straightforward to prove.
The results from Theorem 9.4.2 apply to Chebyshev polynomials as well, with orthogonality being in the sense of (9.4.23). Their Gram matrix is given by
The least-squares solution is not the same in the Legendre and Chebyshev cases: both find the nearest approximation to a given $f(x)$, but they measure distance using different norms.
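The difference can be seen numerically. The sketch below (ours, with the arbitrary choices $f(x)=|x|$ and degree 4) computes both projections by evaluating the coefficients $\langle f,\phi_k\rangle_w / \langle \phi_k,\phi_k\rangle_w$ with numerical quadrature; the two resulting polynomials, each "best" in its own norm, do not coincide.

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial import legendre, chebyshev

f = abs            # the function being approximated (chosen for illustration)
n = 4              # degree of the approximating polynomial

def project(basisval, weight):
    """Coefficients c_k = <f, phi_k>_w / <phi_k, phi_k>_w for k = 0, ..., n."""
    c = []
    for k in range(n + 1):
        e = np.zeros(k + 1)
        e[k] = 1.0                               # coefficient vector selecting phi_k
        phi = lambda x: basisval(x, e)
        num = quad(lambda x: weight(x) * f(x) * phi(x), -1, 1)[0]
        den = quad(lambda x: weight(x) * phi(x) ** 2, -1, 1)[0]
        c.append(num / den)
    return np.array(c)

cL = project(legendre.legval, lambda x: 1.0)                      # unweighted norm
cC = project(chebyshev.chebval, lambda x: 1 / np.sqrt(1 - x**2))  # Chebyshev weight

x = np.linspace(-0.9, 0.9, 7)
print(f(x) - legendre.legval(x, cL))      # residuals of the two "best" fits differ,
print(f(x) - chebyshev.chebval(x, cC))    # because each is optimal in its own norm
```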
Interesting properties can be deduced entirely from the orthogonality conditions. The following result will be relevant in Spectrally accurate integration.
The result of Theorem 9.4.3 holds for families of polynomials orthogonal with respect to other weight functions. The Chebyshev case is unusual in that, thanks to (9.4.25), the roots of $T_n$ are known explicitly:
These are known as the Chebyshev points of the first kind. The chief difference between first-kind and second-kind points is that the latter type include the endpoints ±1. Both work well for polynomial interpolation and give spectral convergence.
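As a quick numerical check (a sketch using one common indexing convention, which may differ from that of (9.4.29) and (9.3.2)), the following evaluates $T_n$ at the first-kind points and lists the second-kind points:

```python
import numpy as np
from numpy.polynomial import chebyshev

n = 6
k = np.arange(1, n + 1)
x1 = np.cos((2 * k - 1) * np.pi / (2 * n))   # first-kind points: the n roots of T_n

Tn = np.zeros(n + 1)
Tn[n] = 1.0                                  # coefficient vector selecting T_n
print(chebyshev.chebval(x1, Tn))             # entries are ~1e-16: T_n vanishes here

x2 = np.cos(np.pi * np.arange(n + 1) / n)    # second-kind points (one common ordering)
print(x2)                                    # note that the endpoints 1 and -1 appear
```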
Both interpolation and the solution of a linear least-squares problem produce a projection of a function into the space of polynomials $\mathcal{P}_n$. In the least-squares case, the close connection with inner products and orthogonality makes the 2-norm, perhaps with a weight function, a natural setting for analysis. Because a constant weight function is the simplest choice, Legendre polynomials are commonly used for least squares.
Interpolation has no easy connection to inner products or the 2-norm. With interpolants a different kind of approximation analysis is more fruitful, often involving the complex plane, in which the max-norm is the natural choice. For reasons beyond the scope of this text, Chebyshev polynomials are typically the most convenient to work with in this context.
✍ Let $F$ be the quasimatrix $\bigl[\, 1 \;\; \cos(\pi x) \;\; \sin(\pi x) \,\bigr]$ for $x \in [-1,1]$.
(a) Find $F^T e^x$.
(b) Find $F^T F$.
✍ (a) Find the best linear approximation in the least-squares sense to the function sin(x) on [−1,1].
(b) Using Theorem 9.4.1, explain why the best fitting quadratic polynomial will be the linear function you found in part (a). (Note: You do not need to carry out the complete calculation.)
(a) ✍ ⌨ Use (9.4.18) to write out $P_2(x)$ and $P_3(x)$. Plot $P_0, P_1, P_2, P_3$ on one graph for $-1 \le x \le 1$. (You may find it interesting to compare to the graph in Exercise 3.3.3.)
(b) ✍ ⌨ Use (9.4.24) to write out $T_2(x)$ and $T_3(x)$. Plot $T_0, T_1, T_2, T_3$ on one graph for $-1 \le x \le 1$.
✍ Use (9.4.18) to show that $P_n(x)$ is an odd function if $n$ is odd and an even function if $n$ is even.
⌨ Using (9.4.18), write a function `legpoly(x,n)` that returns a matrix whose columns are the Legendre polynomials $P_0, P_1, \ldots, P_n$ evaluated at all the points in the vector `x`. Then use your function to plot $P_0, P_1, P_2, P_3$ on one graph.
⌨ (Continuation of previous problem.) Choose 1600 evenly spaced points in $[-1,1]$. For $n = 1, 2, \ldots, 16$, use this vector of points and the function `legpoly` to construct a $1600\times(n+1)$ matrix that discretizes the quasimatrix
Make a table of the matrix condition number $\kappa(A_n)$ as a function of $n$. (These will not be much larger than 1, showing that the Legendre polynomials are a good basis set.)
⌨ Using (9.4.25), write a function `chebpoly` that returns a matrix whose columns are the Chebyshev polynomials $T_0, T_1, \ldots, T_n$ evaluated at all the points in the vector `x`. Then use your function to plot $T_0, T_1, T_2, T_3$ on one graph.
(a) ✍ Use (9.4.25) to show that the first-kind points (9.4.29) are roots of $T_n$.
(b) ✍ Use (9.4.25) to show that the second-kind points (9.3.2) are local extreme points of $T_n$.
✍ Show that the definition (9.4.25) satisfies the recursion relation in (9.4.24).
✍ Use (9.4.25) to show that $\langle T_0, T_0 \rangle = \pi$ and $\langle T_k, T_k \rangle = \pi/2$ for $k > 0$ in the Chebyshev-weighted inner product. (Hint: Change to the variable $\theta$.)