
Section 6.5 Legendre Polynomials and Interpolation

In the last section, we introduced the Chebyshev polynomials and saw that using their roots as interpolation nodes minimizes the \(\ell_{\infty}\) norm of the error. In this section, we learn about another important set of polynomials, the Legendre polynomials. When these are used, the \(\ell_{2}\) norm of the error is minimized.

Definition 6.5.1.

The \(n\)th Legendre polynomial, \(P_{n}(x)\text{,}\) is defined as
\begin{equation} \begin{aligned}P_{0}(x)\amp \equiv 1, \notag \\ P_{1}(x)\amp \equiv x, \notag \\ P_{n}(x)\amp = \frac{2n-1}{n}x P_{n-1}(x) - \frac{n-1}{n}P_{n-2}(x), \qquad \text{for $n \geq 2$.} \end{aligned}\tag{6.5.1} \end{equation}
Equation (6.5.1) is called the Legendre Polynomial Recurrence Formula; it defines each Legendre polynomial in terms of the previous two.
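The recurrence (6.5.1) translates directly into code. Below is a minimal sketch in Python (the function name `legendre` is ours, not from the text) that evaluates \(P_{n}(x)\) by carrying the previous two values through the loop.

```python
def legendre(n, x):
    """Evaluate the nth Legendre polynomial P_n(x) using the
    three-term recurrence (6.5.1)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x          # P_0(x) and P_1(x)
    for k in range(2, n + 1):
        # P_k(x) = ((2k-1)/k) x P_{k-1}(x) - ((k-1)/k) P_{k-2}(x)
        p_prev, p_curr = p_curr, ((2*k - 1)*x*p_curr - (k - 1)*p_prev)/k
    return p_curr
```

For instance, \(P_{2}(x) = (3x^{2}-1)/2\text{,}\) so `legendre(2, 0.5)` returns \(-0.125\text{.}\)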
As discussed at the beginning of this chapter, some sets of functions are orthogonal. In particular, the Legendre polynomials fit this case. The following lemma and theorem show this.

Lemma 6.5.2.

For the Legendre polynomials defined in (6.5.1), \(\langle P_{j}, P_{k} \rangle = \int_{-1}^{1} P_{j}(x) P_{k}(x) \, dx = 0\) whenever \(j \neq k\text{,}\) and \(\langle P_{j}, P_{j} \rangle = \frac{2}{2j+1}\text{.}\)

Proof.

First, we will show that the inner product of two different Legendre polynomials is 0. This will be done by induction. For the base case, note that
\begin{equation*} \begin{aligned}\langle P_{0}, P_{1} \rangle\amp = \int_{-1}^{1} x \cdot 1 \, dx = 0\end{aligned} \end{equation*}
For the inductive step, assume that \(P_{0}, P_{1}, \ldots, P_{n-1}\) are mutually orthogonal and let \(j \lt n\text{.}\) Using the recurrence relation (6.5.1),
\begin{equation*} \begin{aligned}\langle P_{n}, P_{j} \rangle \amp = \frac{2n-1}{n}\int_{-1}^{1} x P_{n-1}(x) P_{j}(x) \, dx - \frac{n-1}{n}\langle P_{n-2}, P_{j} \rangle\end{aligned} \end{equation*}
By the recurrence, \(x P_{j}(x)\) is a linear combination of \(P_{j+1}(x)\) and \(P_{j-1}(x)\text{,}\) so for \(j \lt n-2\) both terms above vanish by the induction hypothesis. For \(j = n-1\text{,}\) the integrand \(x P_{n-1}(x)^{2}\) is an odd function, so the integral is 0, and \(\langle P_{n-2}, P_{n-1} \rangle = 0\) by hypothesis. Finally, for \(j = n-2\text{,}\) a direct computation using \(\int_{-1}^{1} P_{k}(x)^{2}\, dx = 2/(2k+1)\) shows that the two terms cancel.

Theorem 6.5.3.

The set of Legendre polynomials \(\{P_{j}\}\) forms an orthogonal set of functions on \([-1,1]\text{.}\)

Proof.

From Lemma 6.5.2, the polynomials \(P_{j}\) and \(P_{k}\) are orthogonal if \(j \neq k\text{;}\) thus the set \(\{ P_{j}\}\) forms an orthogonal set of functions.

Subsection 6.5.1 Approximating functions using Legendre Polynomials

Let’s say that we wish to find a polynomial \(q(x)\) that best fits the function \(f(x)\) on an interval, say \([-1,1]\text{.}\) Let
\begin{equation*} \begin{aligned}q(x)\amp = c_{0} P_{0}(x) + c_{1} P_{1}(x) + \cdots + c_{n} P_{n}(x)\end{aligned} \end{equation*}
or
\begin{equation*} \begin{aligned}f(x) \approx q(x)\amp = \sum_{i=0}^{n} c_{i} P_{i}(x)\end{aligned} \end{equation*}
Multiply this equation by \(P_{j}(x)\) and integrate on \([-1,1]\text{.}\)
\begin{equation*} \begin{aligned}\int_{-1}^{1} f(x) P_{j}(x) \, dx\amp = \sum_{i=0}^{n} c_{i} \int_{-1}^{1} P_{i}(x) P_{j}(x) \, dx\end{aligned} \end{equation*}
From Lemma 6.5.2, the only nonzero term on the right is the one with \(i=j\text{,}\) so
\begin{equation*} \int_{-1}^{1} f(x) P_{j}(x) \, dx = c_{j} \int_{-1}^{1} (P_{j}(x))^{2} \, dx \end{equation*}
and solving for \(c_{j}\text{,}\) using \(\int_{-1}^{1} (P_{j}(x))^{2} \, dx = \frac{2}{2j+1}\) from the same lemma, we get:
\begin{equation} \begin{aligned}c_{j}\amp = \frac{2j+1}{2}\int_{-1}^{1} f(x) P_{j}(x) \, dx\end{aligned}\tag{6.5.2} \end{equation}
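Formula (6.5.2) can be checked numerically by approximating the integral with a composite Simpson rule. The sketch below (the helper names `simpson` and `legendre_coeff` are ours) recovers the first two coefficients for \(f(x)=e^{-x}\text{,}\) whose exact values are \(\frac{1}{2}(e-e^{-1})\) and \(-3e^{-1}\text{.}\)

```python
import math

def legendre(n, x):
    """Evaluate P_n(x) using the recurrence (6.5.1)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x
    for k in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2*k - 1)*x*p_curr - (k - 1)*p_prev)/k
    return p_curr

def simpson(g, a, b, m=2000):
    """Composite Simpson's rule on [a, b] with m (even) subintervals."""
    h = (b - a)/m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2)*g(a + i*h)
    return s*h/3

def legendre_coeff(f, j):
    """c_j from (6.5.2): (2j+1)/2 times the integral of f(x) P_j(x) on [-1,1]."""
    return (2*j + 1)/2*simpson(lambda x: f(x)*legendre(j, x), -1.0, 1.0)

f = lambda x: math.exp(-x)
c0 = legendre_coeff(f, 0)   # exact value: (e - 1/e)/2
c1 = legendre_coeff(f, 1)   # exact value: -3/e
```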

Example 6.5.4.

Find the approximating polynomial \(q_{4}(x)\) to \(f(x)=e^{-x}\) using Legendre polynomials.
Solution.
In this case, we need to evaluate the integrals:
\begin{equation*} c_{j} = \frac{2j+1}{2}\int_{-1}^{1} f(x) P_{j}(x) \, dx \end{equation*}
for \(j=0,1,2,3,4\text{,}\) and doing this we get:
\begin{equation*} \begin{aligned}c_{0}\amp =\frac{1}{2}(e-e^{-1})\\ c_{1}\amp = -3e^{-1}\\ c_{2}\amp = \frac{5}{2}(e-7e^{-1})\\ c_{3}\amp = \frac{7}{2}(5e-37e^{-1}) \\ c_{4}\amp = \frac{9}{2}(36e-266e^{-1})\end{aligned} \end{equation*}
and the approximating polynomial is
\begin{equation*} \begin{aligned}q_{4}(x)\amp = \sum_{j=0}^{4} c_{j} P_{j}(x)\end{aligned} \end{equation*}
which can be written as:
\begin{equation*} \begin{aligned}q_{4}(x)\amp = \frac{5}{8e}\biggl( 96\,{e^{2}}-705+306\,x-966\,{e^{2}}{x}^{2}+ 7140\,{x}^{2}\\\amp \qquad +70\,{e^{2}}{x}^{3}-42\,x{e^{2}}-518\,{x}^{3}+1134\,{e^{2}}{x}^{4}-8379\,{x}^{4}\biggr)\end{aligned} \end{equation*}
A plot of \(|q_{4}(x)-f(x)|\) on \([-1,1]\) shows that this approximation is accurate to about 3 decimal places. Again, notice that the error pattern is similar to what we have seen before: the error is small in the center of the interval and largest at the endpoints.
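These claims can be verified numerically. The sketch below builds \(q_4\) from the coefficients \(c_j\) computed via (6.5.2) for \(f(x)=e^{-x}\) and compares the error at the center of the interval with the error at an endpoint.

```python
import math

def legendre(n, x):
    """Evaluate P_n(x) using the recurrence (6.5.1)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x
    for k in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2*k - 1)*x*p_curr - (k - 1)*p_prev)/k
    return p_curr

e = math.e
# c_j = (2j+1)/2 * integral of e^{-x} P_j(x) on [-1,1], from (6.5.2)
c = [(e - 1/e)/2,
     -3/e,
     5/2*(e - 7/e),
     7/2*(5*e - 37/e),
     9/2*(36*e - 266/e)]

def q4(x):
    """The degree-4 least-squares approximation to e^{-x} on [-1,1]."""
    return sum(c[j]*legendre(j, x) for j in range(5))

err_mid = abs(q4(0.0) - math.exp(-0.0))  # error at the center
err_end = abs(q4(1.0) - math.exp(-1.0))  # error at an endpoint
```

The endpoint error is on the order of \(10^{-3}\text{,}\) consistent with about 3 decimal places of accuracy, and the error at the center is noticeably smaller.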
The following lemma can be quite helpful as we will see below.

Lemma 6.5.5.

If \(Q(x)\) is any polynomial of degree at most \(n-1\text{,}\) then \(\displaystyle \int_{-1}^{1} P_{n}(x) Q(x) \, dx = 0\text{.}\)

Proof.

The polynomial \(Q(x)\) can be written as
\begin{equation*} Q(x) = \sum_{j=0}^{n-1} c_j P_j(x) \end{equation*}
using the \(c_j\) in (6.5.2) with \(f\) replaced by \(Q\text{.}\)
Then
\begin{equation*} \begin{aligned} \int_{-1}^1 P_n(x) Q(x) \,dx \amp = \int_{-1}^1 P_n(x) \sum_{j=0}^{n-1} c_j P_j(x) \, dx \\ \amp = \sum_{j=0}^{n-1} c_j \int_{-1}^1 P_n(x) P_j (x) \,dx = 0 \end{aligned} \end{equation*}
because each of the integrals above is 0 by Lemma 6.5.2.

Subsection 6.5.2 Gaussian Quadrature Revisited

In Section 5.4, we mentioned that the quadrature points, or nodes, of the formula are the roots of the Legendre polynomials. We can now explain this, since we have better background in approximation theory.
Our desire is to find
\begin{equation*} I(f) = \int_{-1}^1 f(x) \, dx \end{equation*}
where we return to the interval \([-1,1]\text{,}\) where the Legendre polynomials are defined. Gaussian quadrature requires that we evaluate \(I(f)\) exactly for all polynomials up to some degree. We will show that an \(n\)-point rule is exact whenever \(F(x)\) is a polynomial of degree \(2n-1\) or less.
From Subsection 1.1.5, the quotient-remainder theorem of polynomials can be applied as
\begin{equation} F(x) = P_n(x) Q(x) + R(x)\tag{6.5.3} \end{equation}
where \(P_n\) is the Legendre polynomial in (6.5.1), a degree \(n\) polynomial, and \(Q(x)\) and \(R(x)\) are polynomials of degree at most \(n-1\text{.}\) Using (6.5.3),
\begin{equation*} \int_{-1}^1 F(x) \, dx = \int_{-1}^1 P_n(x) Q(x) \, dx + \int_{-1}^1 R(x) \, dx \end{equation*}
and because the first integral on the right side is 0 from Lemma 6.5.5,
\begin{equation*} \int_{-1}^1 F(x) \, dx = \int_{-1}^1 R(x) \, dx \end{equation*}
Recall that quadrature is to write
\begin{equation*} I(f) = \sum_{k=1}^n w_k f(x_k) \end{equation*}
Examining (6.5.3) at a node \(x_k\)
\begin{equation*} F(x_k) = P_n(x_k) Q(x_k) + R(x_k) \end{equation*}
If we take the \(x_k\) to be the roots of \(P_n(x)\text{,}\) then \(P_n(x_k)=0\) and
\begin{equation*} F(x_k) = R(x_k). \end{equation*}
The implication of this is that at each node \(x_k\text{,}\) the degree \(2n-1\) polynomial \(F(x)\) has the same function value as the degree \(n-1\) polynomial \(R(x)\text{.}\) Since an \(n\)-point interpolatory rule integrates polynomials of degree \(n-1\) exactly,
\begin{equation*} \sum_{k=1}^n w_k F(x_k) = \sum_{k=1}^n w_k R(x_k) = \int_{-1}^1 R(x) \, dx = \int_{-1}^1 F(x) \, dx, \end{equation*}
so the quadrature rule is exact for all polynomials of degree \(2n-1\text{.}\)
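As a concrete check, consider \(n=2\text{:}\) the nodes are the roots \(\pm 1/\sqrt{3}\) of \(P_2(x) = (3x^2-1)/2\text{,}\) and both weights equal 1 (the weights come from integrating the Lagrange basis polynomials, a standard fact not derived in this section). The sketch below verifies that this two-point rule integrates an arbitrary cubic exactly.

```python
import math

# Roots of P_2(x) = (3x^2 - 1)/2 and the standard 2-point Gauss weights
nodes = [-1/math.sqrt(3), 1/math.sqrt(3)]
weights = [1.0, 1.0]

def F(x):
    # an arbitrary polynomial of degree 2n - 1 = 3
    return 7*x**3 - 4*x**2 + 2*x + 5

approx = sum(w*F(x) for w, x in zip(weights, nodes))

# Exact integral on [-1,1]: odd terms vanish, so it is -4*(2/3) + 5*2 = 22/3
exact = 22/3
```

The two-point rule reproduces the exact value even though it only samples \(F\) at two points.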

Subsection 6.5.3 Chebyshev Polynomials

In Section 6.1, we covered inner product spaces and weighted inner products. The Chebyshev polynomials from Section 6.4 provide our first example of orthogonality with respect to a weighted inner product. The next theorem shows that these polynomials are orthogonal.

Theorem 6.5.6.

The Chebyshev polynomials are orthogonal on \([-1,1]\) with respect to the weighted inner product with weight \(w(x) = 1/\sqrt{1-x^{2}}\text{.}\)

Proof.

Let \(T_{i}\) and \(T_{j}\) be the \(i\)th and \(j\)th Chebyshev polynomial defined in (6.4.1).
\begin{equation*} \begin{aligned}\langle T_{i}, T_{j} \rangle_{w}\amp = \int_{-1}^{1} T_{i} (x) T_{j} (x) \frac{1}{\sqrt{1-x^{2}}}\, dx \\ \amp \qquad \text{using the definition of the Chebyshev polynomial}\\ \amp = \int_{-1}^{1} \cos (i \cos^{-1}x) \cos (j \cos^{-1}x) \frac{1}{\sqrt{1-x^{2}}}\, dx\\ \amp \qquad \text{change variables by letting $x=\cos \theta$} \\ \amp = \int_{\pi}^{0} \cos (i \cos^{-1}\cos \theta) \cos (j \cos^{-1}\cos \theta) \frac{1}{\sqrt{1-\cos^{2} \theta}}\cdot (-\sin \theta) \, d\theta \\ \amp = \int_{0}^{\pi}\cos i \theta \cos j \theta \, d\theta = \begin{cases}0 \amp \text{if $i \neq j$} \\ \frac{\pi}{2} \amp \text{if $i=j\neq 0$} \\ \pi \amp \text{if $i=j=0$}\end{cases}\end{aligned} \end{equation*}
where the last step is a standard integral from calculus. This means that the Chebyshev polynomials form a set of orthogonal polynomials.
Now that we have established that the Chebyshev polynomials are orthogonal, we can use this to do interpolation.
Let \(P_{n}(x)\) be a polynomial that we desire to approximate a function \(f(x)\) on \([-1,1]\) with weight \(w(x)=1/\sqrt{1-x^{2}}\text{;}\) then
\begin{equation} P_{n}(x)= \sum_{i=0}^{n} \frac{\langle f,T_{i} \rangle_{w}}{\langle T_{i}, T_{i} \rangle_{w}}T_{i}(x)\tag{6.5.4} \end{equation}

Example 6.5.7.

Use the formula above with Chebyshev polynomials on \([-1,1]\) to find \(P_{3}(x)\) for \(f(x)=e^{-x}\text{.}\)
First, we calculate
\begin{equation*} \begin{aligned}d_{i}\amp = \frac{\langle f,T_{i} \rangle}{\langle T_{i}, T_{i} \rangle}\\\amp = \frac{\int_{-1}^{1} (1-x^{2})^{-1/2}f(x) T_{i}(x) \, dx }{\int_{-1}^{1} (1-x^{2})^{-1/2}T_{i}(x)^{2} \, dx }\\\end{aligned} \end{equation*}
for \(i=0,1,2,3\text{.}\) We get:
\begin{equation*} \begin{aligned}d_{0}\amp = 1.2660658780\\ d_{1}\amp = -1.1303182080\\ d_{2}\amp = 0.2714953396\\ d_{3}\amp = -0.0443368498\end{aligned} \end{equation*}
And thus,
\begin{equation*} \begin{aligned}P_{3}(x)\amp = d_{0} T_{0}(x) + d_{1} T_{1}(x) + d_{2} T_{2}(x) + d_{3} T_{3}(x) \\\amp = 1.266066 -1.130318 x +0.271495 (2x^{2}-1) -0.044337 (4x^{3}-3x) \\\amp = 0.9945705384-0.9973076585x+0.5429906792x^{2}-0.1773473994x^{3}\end{aligned} \end{equation*}
which is the same result we arrived at in Example 6.4.9. It should be emphasized that the result is the same whether one interpolates on the Chebyshev points or finds the minimizing polynomial using the appropriate norm.
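The change of variables \(x=\cos\theta\) from the proof of orthogonality also gives a practical way to compute the \(d_{i}\text{:}\) the weighted integrals become ordinary integrals over \([0,\pi]\text{,}\) with \(\langle T_{0},T_{0}\rangle_{w}=\pi\) and \(\langle T_{i},T_{i}\rangle_{w}=\pi/2\) for \(i\geq 1\text{.}\) A sketch (helper names are ours):

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson's rule on [a, b] with m (even) subintervals."""
    h = (b - a)/m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2)*g(a + i*h)
    return s*h/3

def chebyshev_coeff(f, i):
    """d_i = <f, T_i>_w / <T_i, T_i>_w, computed with x = cos(theta) so the
    weight 1/sqrt(1 - x^2) cancels; the denominator is pi (i=0) or pi/2."""
    integral = simpson(lambda t: f(math.cos(t))*math.cos(i*t), 0.0, math.pi)
    return integral/math.pi if i == 0 else 2*integral/math.pi

f = lambda x: math.exp(-x)
d0 = chebyshev_coeff(f, 0)
d1 = chebyshev_coeff(f, 1)
```

The computed values agree with the \(d_{i}\) listed in the example above.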

Subsection 6.5.4 Summary of Polynomial Interpolation

This is a summary of polynomial interpolation and all of the techniques that we saw in both the chapter on interpolation and this chapter on approximation. In short, we wish to find \(P_{n}(x)\) that passes through \((x_{i},f(x_{i}))\) for \(i=0,1,2, \ldots, n\text{.}\) In some cases the \(x_{i}\) are selected. In others, they are given.
Lagrange Form
The Lagrange Form from Section 4.2 is
\begin{equation*} L_{n,k}(x) = \prod_{\stackrel{i=0,}{i\neq k}}^{n}\frac{x-x_{i}}{x_{k}-x_{i}} \end{equation*}
and the polynomial that passes through the \(n+1\) points \((x_{0},f(x_{0})), (x_{1}, f(x_{1})), \ldots, (x_{n}, f(x_{n}))\) is
\begin{equation*} P_{n}(x) = \sum_{k=0}^{n} f(x_{k}) L_{n,k}(x) \end{equation*}
The Lagrange form of the interpolated polynomial is useful for the error formula (see below) and the existence-uniqueness theorem.
Existence-Uniqueness Theorem
The following is the Existence-Uniqueness theorem (see Theorem 4.2.5) for a polynomial that passes through the points \(\{(x_{0},f(x_{0})), (x_{1}, f(x_{1})), \ldots, (x_{n}, f(x_{n})) \}\text{.}\)
Existence Uniqueness Theorem of Interpolation
If \(x_{0},x_{1},x_{2},\ldots, x_{n}\) are distinct points in the domain of \(f\text{,}\) then there exists a unique polynomial, \(P\) of degree at most \(n\) such that \(P\) interpolates \(f\text{;}\) that is
\begin{equation*} P(x_{i}) = f(x_{i}) \end{equation*}
for each \(i=0,1,2, \ldots, n\text{.}\)
One implication is that the method of finding the polynomial does not matter.
Error formula
The Error formula for polynomial interpolation was given in Theorem 4.2.6 and is repeated here:
Suppose \(x_{0}, x_{1}, x_{2}, \ldots x_{n}\) are distinct numbers in \([a,b]\) and \(f \in C^{n+1}[a,b]\text{.}\) Then for each \(x\) in \([a,b]\text{,}\) a number \(\xi(x)\) in \((a,b)\) exists with
\begin{equation*} f(x) = P(x) + \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_{0})(x-x_{1}) \cdots (x-x_{n}). \end{equation*}
This results in an error term:
\begin{equation*} R(x) = f(x)-P(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_{0})(x-x_{1}) \cdots (x-x_{n}). \end{equation*}
This theorem is key for determining the error (and a bound on the error) of both interpolated polynomials and numerical integration (the error formula for integration is the integral of this error term).
Newton Divided Differences
If we define
\begin{equation*} a_{k} = f[x_{0},x_{1},x_{2},\ldots, x_{k}] \qquad \text{for $k=0,1,2,\ldots,n$}. \end{equation*}
where
\begin{equation*} f[x_{i},x_{i+1}, \ldots,x_{i+k}] = \frac{f[x_{i+1},\ldots,x_{i+k}]-f[x_{i},x_{i+1},\ldots,x_{i+k-1}]}{x_{i+k}-x_{i}}. \end{equation*}
Then the interpolating polynomial can be written as
\begin{equation} P_{0,1,2,\ldots,n}(x) = \sum_{k=0}^{n} f[x_{0},x_{1},x_{2},\ldots,x_{k}] \biggl(\prod_{i=0}^{k-1}(x-x_{i}) \biggr)\tag{6.5.5} \end{equation}
The advantage of Newton’s divided differences is that one can add extra interpolation points after the polynomial is created. In addition, the divided differences can be calculated relatively easily using a divided-difference table.
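The divided-difference table and the Newton form (6.5.5) can be sketched as follows (function names are ours):

```python
def divided_differences(xs, ys):
    """Return f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n], the top edge of the
    divided-difference table, which are the coefficients in (6.5.5)."""
    coeffs = list(ys)
    n = len(xs)
    for k in range(1, n):
        # overwrite in place from the bottom up, so lower entries still hold
        # the previous column's values when they are needed
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1])/(xs[i] - xs[i - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form (6.5.5) by nested multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result*(x - xs[k]) + coeffs[k]
    return result

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]   # samples of f(x) = x^2
coeffs = divided_differences(xs, ys)        # [0.0, 1.0, 1.0] for x^2
```

Adding a fourth point would only append one new coefficient; the existing ones are unchanged, which is exactly the advantage described above.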
Chebyshev Interpolation
Using either the Lagrange form or Newton’s divided differences, the actual \(x\)-values of the points can be anything. Typically, equally-spaced points result in a poor fit to a function.
If one uses the roots of the Chebyshev polynomial \(T_{n+1}(x)\) as the \(n+1\) points on \([-1,1]\) (or the transformed version on \([a,b]\)), then the resulting error minimizes the \(\ell_{\infty}\) norm, or
\begin{equation*} \begin{aligned}\max_{x \in [a,b]}|f(x)-P_{n}(x)|\end{aligned} \end{equation*}
Alternatively, the polynomial \(P_{n}(x)\) can be found by
\begin{equation*} \begin{aligned}P_{n}(x)&= \sum_{k=0}^{n}\frac{\langle f(x), T_{k}(x) \rangle_{w}}{\langle T_{k}(x), T_{k}(x) \rangle_{w}}T_{k}(x)\end{aligned} \end{equation*}
The Legendre Polynomials and the \(\ell_2\) norm
If one interpolates on the Legendre nodes, then the resulting polynomial minimizes the \(\ell_{2}\) norm. The Legendre nodes are the roots of the Legendre polynomials, \(L_{n}\) where
\begin{equation*} \begin{aligned} L_{0}(x)&\equiv 1, \\ L_{1}(x)&\equiv x, \\ L_{n}(x)&= \frac{2n-1}{n}x L_{n-1}(x) - \frac{n-1}{n}L_{n-2}(x), \qquad \text{for $n \geq 2$} \end{aligned} \end{equation*}
In particular, to find \(P_{n}(x)\text{,}\) the \(n\)th degree polynomial that minimizes \(||f-P_{n}||_{2}\text{,}\) one interpolates on the roots of \(L_{n+1}(x)\text{.}\)
Alternatively, to find \(P_{n}(x)\text{,}\) one can exploit the orthogonal nature of the Legendre polynomials. Then,
\begin{equation*} \begin{aligned}P_{n}(x)&= \sum_{k=0}^{n}\frac{\langle f(x), L_{k}(x) \rangle}{\langle L_{k}(x), L_{k}(x) \rangle}L_{k}(x)\end{aligned} \end{equation*}