
9.4 Orthogonal polynomials

Interpolation is not the only way to use polynomials for global approximation of functions. In Fitting functions to data we saw how to find least-squares polynomial fits to data by solving linear least-squares matrix problems. This idea can be extended to fitting functions.

We can extend least-squares fitting from data to functions by extending several familiar finite-dimensional definitions. The continuous extension of a sum is an integral, which leads to the function inner product (9.4.1) for functions $f$ and $g$ in a suitable space $S$ of real-valued functions on $[-1,1]$,

$$ \langle f,g \rangle = \int_{-1}^1 f(x)g(x)\,dx, $$

along with the associated 2-norm $\|f\|_2 = \langle f,f \rangle^{1/2}$.

9.4.1 Quasimatrices

If we are extending our notion of vectors to include continuous functions, what should serve as an extension of a matrix? One of our most important interpretations of a matrix is the connection to linear combinations. For instance, the Vandermonde-like system $\mathbf{V} \mathbf{c} \approx \mathbf{y}$ from (3.1.2) is the statement

$$ \mathbf{y} \approx \mathbf{V} \mathbf{c} = c_0 \mathbf{v}_0 + c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n, $$

in which the $i$th row of $\mathbf{v}_j$ is $t_i^j$ for $j=0,\ldots,n$. This was derived as a discrete approximation to

$$ \mathbf{y} \approx c_0 + c_1 x + \cdots + c_n x^n, $$

which we want to abbreviate in “matrix”-vector form.

A quasimatrix is a "matrix" $\mathbf{F} = \bigl[\, \underline{f_1} \;\; \underline{f_2} \;\; \cdots \;\; \underline{f_n} \,\bigr]$ whose columns are functions on an interval rather than vectors. For a vector $\mathbf{z}$ and a function $g$, we define $\mathbf{F}\mathbf{z} = \sum_{j=1}^n z_j f_j$, define $\mathbf{F}^T g$ as the vector whose entries are $\langle f_j, g \rangle$, and define $\mathbf{F}^T\mathbf{F}$ as the matrix whose $(i,j)$ entry is $\langle f_i, f_j \rangle$. We consider any other expressions involving a quasimatrix to be undefined. It might help to think of $\mathbf{F}$ as an $\infty\times n$ matrix, which is consistent with the definitions that $\mathbf{F}\mathbf{z}$ is a function ($\infty\times 1$), $\mathbf{F}^T g$ is a vector ($n\times 1$), and $\mathbf{F}^T\mathbf{F}$ is a matrix ($n \times n$). When infinite dimensions combine in a product, we use integrals rather than sums.
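To make these products concrete, here is a minimal numerical sketch (not part of the text's own codes) that represents a quasimatrix as a Python list of functions and approximates the integrals with `scipy.integrate.quad`; the helper name `inner` and the example columns $1, x, x^2$ are illustrative choices only.

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Function inner product <f, g>: the integral of f(x)*g(x) over [-1, 1]."""
    value, _ = quad(lambda x: f(x) * g(x), -1, 1)
    return value

# A quasimatrix F whose three "columns" are the functions 1, x, x^2 on [-1, 1].
F = [lambda x: 1.0 + 0.0 * x, lambda x: x, lambda x: x**2]

# F z is a function: the linear combination of the columns with coefficients z.
z = np.array([1.0, -2.0, 0.5])
Fz = lambda x: sum(zj * fj(x) for zj, fj in zip(z, F))

# F^T g is a vector of inner products of the columns with a function g.
g = np.exp
Ftg = np.array([inner(fj, g) for fj in F])

# F^T F is an n-by-n matrix of inner products between pairs of columns.
FtF = np.array([[inner(fi, fj) for fj in F] for fi in F])
```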

9.4.2 Normal equations

The discrete linear least-squares problem of minimizing $\| \mathbf{y} - \mathbf{V} \mathbf{c} \|_2$ over all possible $\mathbf{c}$, given matrix $\mathbf{V}$ and data vector $\mathbf{y}$, has a solution via the normal equations (3.2.3),

$$ \mathbf{c} = \left(\mathbf{V}^T\mathbf{V}\right)^{-1} \mathbf{V}^T \mathbf{y}. $$

We can now reinterpret (9.4.11) in terms of quasimatrices: given a quasimatrix $\mathbf{V}$ and a function $f$, the coefficient vector $\mathbf{c}$ that minimizes $\|f - \mathbf{V}\mathbf{c}\|_2$ is $\mathbf{c} = \left(\mathbf{V}^T\mathbf{V}\right)^{-1}\mathbf{V}^T f$ (Theorem 9.4.1).

There is no need to supply a proof of Theorem 9.4.1 because it will read exactly the same as for the discrete normal equations. All the effort has gone into making definitions that set up a perfect analogy. In retrospect, all we needed in the original discrete case were linear combinations and inner products.
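As a sketch of how the continuous normal equations might be carried out numerically (an illustration under assumptions, not the text's implementation), the following replaces the integrals by Gauss-Legendre quadrature and uses the monomial columns $1, x, x^2$ as the quasimatrix; the node count is arbitrary.

```python
import numpy as np

# Gauss-Legendre nodes/weights: the weighted sum of values approximates
# the integral over [-1, 1] (and is exact for low-degree polynomials).
x, w = np.polynomial.legendre.leggauss(40)

# Quasimatrix V with monomial columns 1, x, x^2, sampled at the nodes.
V = np.vander(x, 3, increasing=True)
f = np.exp(x)                       # the function being approximated

# Continuous normal equations (V^T V) c = V^T f, with the infinite
# dimension handled by quadrature instead of an exact integral.
VtV = V.T @ (w[:, None] * V)
Vtf = V.T @ (w * f)
c = np.linalg.solve(VtV, Vtf)       # least-squares coefficients c_0, c_1, c_2
```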

9.4.3 Legendre polynomials

Equation (9.4.13) becomes much simpler if $\mathbf{V}^T\mathbf{V}$ is diagonal. By our definitions, this would imply that the columns of $\mathbf{V}$ are mutually orthogonal in the sense of the function inner product. This is not the case for the monomial functions $x^j$. But there are orthogonal polynomials which do satisfy this property.

For what follows, let $\mathcal{P}_n \subset S$ be the set of polynomials of degree $n$ or less.

The Legendre polynomials are generated by the recurrence (9.4.18): $P_0(x)=1$, $P_1(x)=x$, and $P_k(x) = \frac{2k-1}{k}\,x\,P_{k-1}(x) - \frac{k-1}{k}\,P_{k-2}(x)$ for $k \ge 2$. Here are some key facts that are straightforward to prove (Theorem 9.4.2): each $P_k$ has degree exactly $k$, the Legendre polynomials are mutually orthogonal in the inner product (9.4.1), and $\langle P_k,P_k \rangle = \alpha_k^2 = \frac{2}{2k+1}$.

Now let us define the quasimatrix

$$ \mathbf{L}_n(x) = \begin{bmatrix} \alpha_0^{-1} \underline{P_0} & \alpha_1^{-1} \underline{P_1} & \cdots & \alpha_{n}^{-1} \underline{P_{n}} \end{bmatrix}. $$

Then $\mathbf{L}_n^T\mathbf{L}_n=\mathbf{I}$. The normal equations (9.4.13) thus simplify accordingly. Unraveling the definitions, we find the least-squares solution

$$ \mathbf{L}_n \bigl( \mathbf{L}_n^T f \bigr) = \sum_{k=0}^n c_k P_k(x), \quad \text{where } c_k = \frac{1}{\alpha_k^2} \langle P_k,f \rangle. $$
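A minimal sketch of this formula, assuming NumPy's `numpy.polynomial.legendre` utilities for evaluating $P_k$ and Gauss-Legendre quadrature for the inner products; the function name and node count are illustrative.

```python
import numpy as np

def legendre_lsq_coeffs(f, n):
    """Coefficients c_k of the least-squares approximation sum_k c_k P_k(x)
    on [-1, 1], using c_k = <P_k, f> / alpha_k^2 with alpha_k^2 = 2/(2k+1)."""
    x, w = np.polynomial.legendre.leggauss(2 * n + 20)   # quadrature nodes/weights
    P = np.polynomial.legendre.legvander(x, n)           # column k is P_k(x)
    inner = P.T @ (w * f(x))                             # <P_k, f> for each k
    return inner * (2 * np.arange(n + 1) + 1) / 2        # divide by alpha_k^2

# Example: the degree-2 least-squares polynomial for e^x, evaluated at x = 0.5.
c = legendre_lsq_coeffs(np.exp, 2)
print(np.polynomial.legendre.legval(0.5, c))
```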

9.4.4 Chebyshev polynomials

Equation (9.4.1) is not the only useful way to define an inner product on a function space. It can be generalized to

$$ \langle f,g \rangle = \int_{-1}^1 f(x)g(x)w(x)\,dx $$

for a positive function $w(x)$ called the weight function of the inner product. An important special case is

$$ \langle f,g \rangle = \int_{-1}^1 \frac{f(x)g(x)}{\sqrt{1-x^2}}\,dx. $$

The family orthogonal with respect to this inner product is the Chebyshev polynomials, defined by the recurrence (9.4.24): $T_0(x)=1$, $T_1(x)=x$, and $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$ for $k \ge 2$. Chebyshev polynomials also have a startling alternative form,

$$ T_k(x) = \cos\left( k \theta \right), \quad \theta = \arccos(x). $$

The results from Theorem 9.4.2 apply to Chebyshev polynomials as well, with orthogonality being in the sense of (9.4.23). Their Gram matrix is given by

$$ \langle T_i,T_j \rangle = \begin{cases} 0, & i\neq j, \\ \gamma_0^2 = \pi, & i=j=0, \\ \gamma_i^2=\pi/2, & i=j>0. \end{cases} $$
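As a numerical sanity check (not taken from the text), this sketch approximates the Chebyshev-weighted inner products with Chebyshev-Gauss quadrature, which builds the weight $1/\sqrt{1-x^2}$ into its nodes and weights; the choice $n=4$ is arbitrary.

```python
import numpy as np

n = 4
x, w = np.polynomial.chebyshev.chebgauss(n + 1)   # nodes/weights for the Chebyshev weight
T = np.polynomial.chebyshev.chebvander(x, n)      # column k is T_k(x)

# Matrix of inner products <T_i, T_j> under the weight 1/sqrt(1 - x^2);
# up to rounding it equals diag(pi, pi/2, ..., pi/2).
G = T.T @ (w[:, None] * T)
print(np.round(G, 12))
```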

The least-squares solutions in the Legendre and Chebyshev cases are not the same: both find the nearest approximation to a given $f(x)$, but they measure distance with different norms.
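To illustrate the difference, here is a hedged sketch (assumptions: NumPy's polynomial modules, 200 quadrature nodes, and the test function $f(x)=|x|$, all chosen only for demonstration) that computes degree-2 least-squares fits in both senses, using the Legendre coefficient formula above and the Chebyshev analog $c_0 = \langle T_0,f\rangle/\pi$, $c_k = 2\langle T_k,f\rangle/\pi$ implied by the Gram matrix.

```python
import numpy as np

f, n = np.abs, 2

# Legendre (constant weight): c_k = (2k + 1)/2 * <P_k, f>.
xg, wg = np.polynomial.legendre.leggauss(200)
P = np.polynomial.legendre.legvander(xg, n)
c_leg = (P.T @ (wg * f(xg))) * (2 * np.arange(n + 1) + 1) / 2

# Chebyshev weight: c_0 = <T_0, f>/pi and c_k = 2 <T_k, f>/pi for k > 0.
xc, wc = np.polynomial.chebyshev.chebgauss(200)
T = np.polynomial.chebyshev.chebvander(xc, n)
c_cheb = (T.T @ (wc * f(xc))) / np.array([np.pi] + [np.pi / 2] * n)

# The two quadratic approximations of |x| are close but not identical.
xx = np.linspace(-1, 1, 5)
print(np.polynomial.legendre.legval(xx, c_leg))
print(np.polynomial.chebyshev.chebval(xx, c_cheb))
```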

9.4.5 Roots of orthogonal polynomials

Interesting properties can be deduced entirely from the orthogonality conditions. The following result will be relevant in Spectrally accurate integration: the $n$ roots of the Legendre polynomial $P_n$ are real, simple, and lie in the open interval $(-1,1)$ (Theorem 9.4.3).

The result of Theorem 9.4.3 also holds for orthogonal polynomial families with other weight functions, such as the Chebyshev polynomials. The Chebyshev case is unusual in that, thanks to (9.4.25), the roots of $T_n$ are known explicitly:

$$ t_k = \cos\left(\frac{2k-1}{2n}\pi\right), \qquad k=1,\ldots,n. $$

These are known as the Chebyshev points of the first kind. The chief difference between first-kind and second-kind points is that the latter include the endpoints $\pm 1$. Both work well for polynomial interpolation and give spectral convergence.
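As a quick numerical confirmation (an illustrative sketch, not the text's code), the following evaluates $T_n$ at the first-kind points using NumPy's Chebyshev utilities; $n=6$ is an arbitrary choice.

```python
import numpy as np

n = 6
k = np.arange(1, n + 1)
t = np.cos((2 * k - 1) * np.pi / (2 * n))     # first-kind Chebyshev points

# Coefficient vector that selects T_n alone, then evaluate T_n at the points.
e = np.zeros(n + 1)
e[n] = 1.0
print(np.polynomial.chebyshev.chebval(t, e))  # entries are zero up to roundoff
```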

9.4.6 Least squares versus interpolation

Both interpolation and the solution of a linear least-squares problem produce a projection of a function into the space of polynomials $\mathcal{P}_n$. In the least-squares case, the close connection with inner products and orthogonality makes the 2-norm, perhaps with a weight function, a natural setting for analysis. Because a constant weight function is the simplest choice, Legendre polynomials are commonly used for least squares.

Interpolation has no easy connection to inner products or the 2-norm. With interpolants a different kind of approximation analysis is more fruitful, often involving the complex plane, in which the max-norm is the natural choice. For reasons beyond the scope of this text, Chebyshev polynomials are typically the most convenient to work with in this context.

9.4.7 Exercises

  1. ✍ Let $\mathbf{F}$ be the quasimatrix $\bigl[\, \underline{1} \quad\! \underline{\cos(\pi x)}\quad\! \underline{\sin(\pi x)}\,\bigr]$ for $x\in[-1,1]$.

    (a) Find $\mathbf{F}^T e^x$.

    (b) Find $\mathbf{F}^T \mathbf{F}$.

  2. (a) Find the best linear approximation in the least-squares sense to the function $\sin(x)$ on $[-1,1]$.

    (b) Using Theorem 9.4.1, explain why the best fitting quadratic polynomial will be the linear function you found in part (a). (Note: You do not need to carry out the complete calculation.)

  3. (a) ✍ ⌨ Use (9.4.18) to write out $P_2(x)$ and $P_3(x)$. Plot $P_0,P_1,P_2,P_3$ on one graph for $-1\le x \le 1$. (You may find it interesting to compare to the graph in Exercise 3.3.3.)

    (b) ✍ ⌨ Use (9.4.24) to write out $T_2(x)$ and $T_3(x)$. Plot $T_0,T_1,T_2,T_3$ on one graph for $-1\le x \le 1$.

  4. ✍ Use (9.4.18) to show that $P_n(x)$ is an odd function if $n$ is odd and an even function if $n$ is even.

  5. ⌨ Using (9.4.18), write a function legpoly(x,n) that returns a matrix whose columns are the Legendre polynomials $P_0,P_1,\ldots,P_n$ evaluated at all the points in the vector $x$. Then use your function to plot $P_0,P_1,P_2,P_3$ on one graph.

  6. ⌨ (Continuation of previous problem.) Choose 1600 evenly spaced points in $[-1,1]$. For $n=1,2,\ldots,16$, use this vector of points and the function legpoly to construct a $1600\times (n+1)$ matrix that discretizes the quasimatrix

    $$ \mathbf{A}_n = \begin{bmatrix} \underline{P_0} & \underline{P_1} & \cdots & \underline{P_{n}} \end{bmatrix}. $$

    Make a table of the matrix condition number $\kappa(\mathbf{A}_n)$ as a function of $n$. (These will not be much larger than 1, showing that the Legendre polynomials are a good basis set.)

  7. ⌨ Using (9.4.25), write a function chebpoly that returns a matrix whose columns are the Chebyshev polynomials $T_0,T_1,\ldots,T_n$ evaluated at all the points in the vector $x$. Then use your function to plot $T_0,T_1,T_2,T_3$ on one graph.

  8. (a) ✍ Use (9.4.25) to show that the first-kind points (9.4.29) are roots of $T_n$.

    (b) ✍ Use (9.4.25) to show that the second-kind points (9.3.2) are local extreme points of $T_n$.

  9. ✍ Show that the definition (9.4.25) satisfies the recursion relation in (9.4.24).

  10. ✍ Use (9.4.25) to show that $\langle T_0,T_0 \rangle=\pi$ and $\langle T_k,T_k \rangle=\pi/2$ for $k>0$ in the Chebyshev-weighted inner product. (Hint: Change to the variable $\theta$.)