FIGURE 9.10. Distribution of quadrature nodes (left); integration step density in the approximation of the integral $\int_0^L \alpha x e^{-\gamma x}/(x+\beta)\,dx$ (right)

$-1/6$ and $M_2 = -4/15$ and determine, using the definition, the degree of exactness $r$ of each formula.

[Hint: find $r$ such that $I_n(x^k) = \int_a^b x^k\,dx$ for $k = 0, \dots, r$, and $I_n(x^j) \neq \int_a^b x^j\,dx$ for $j > r$.]

3. Let $I_n(f) = \sum_{k=0}^{n} \alpha_k f(x_k)$ be a Lagrange quadrature formula on $n+1$ nodes. Compute the degree of exactness $r$ of the formulae:

(a) $I_2(f) = (2/3)\,[\,2f(-1/2) - f(0) + 2f(1/2)\,]$,

(b) $I_4(f) = (1/4)\,[\,f(-1) + 3f(-1/3) + 3f(1/3) + f(1)\,]$.

What is the order of infinitesimal $p$ for (a) and (b)?

[Solution: $r = 3$ and $p = 5$ for both $I_2(f)$ and $I_4(f)$.]

4. Compute $df[x_0, \dots, x_n, x]/dx$ by checking (9.22).

[Hint: proceed by computing directly the derivative at $x$ as an incremental ratio, in the case where only one node $x_0$ exists, then increase progressively the order of the divided difference.]

5. Let $I_w(f) = \int_0^1 w(x)f(x)\,dx$ with $w(x) = \sqrt{x}$, and consider the quadrature formula $Q(f) = af(x_1)$. Find $a$ and $x_1$ in such a way that $Q$ has maximum degree of exactness $r$.

[Solution: $a = 2/3$, $x_1 = 3/5$ and $r = 1$.]

6. Let us consider the quadrature formula $Q(f) = \alpha_1 f(0) + \alpha_2 f(1) + \alpha_3 f'(0)$ for the approximation of $I(f) = \int_0^1 f(x)\,dx$, where $f \in C^1([0,1])$. Determine the coefficients $\alpha_j$, for $j = 1, 2, 3$, in such a way that $Q$ has degree of exactness $r = 2$.

[Solution: $\alpha_1 = 2/3$, $\alpha_2 = 1/3$ and $\alpha_3 = 1/6$.]

7. Apply the midpoint, trapezoidal and Cavalieri-Simpson composite rules to approximate the integral

$$\int_{-1}^{1} |x| e^x\,dx,$$

and discuss their convergence as a function of the size $H$ of the subintervals.

414 9. Numerical Integration

8. Consider the integral $I(f) = \int_0^1 e^x\,dx$ and estimate the minimum number $m$ of subintervals that is needed for computing $I(f)$ up to an absolute error $\leq 5 \cdot 10^{-4}$ using the composite trapezoidal (TR) and Cavalieri-Simpson (CS) rules. Evaluate in both cases the absolute error Err that is actually made.

[Solution: for TR, we have $m = 17$ and Err $= 4.95 \cdot 10^{-4}$, while for CS, $m = 2$ and Err $= 3.70 \cdot 10^{-5}$.]
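The minimal $m$ in Exercise 8 can be checked by simply increasing $m$ until the composite rule meets the tolerance. A minimal sketch in Python (the book's own programs are MATLAB; Python is used here only for illustration):

```python
import math

def trapezoid(f, a, b, m):
    # composite trapezoidal rule on m subintervals
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, m))
    return h * s

def simpson(f, a, b, m):
    # composite Cavalieri-Simpson rule on m subintervals (2m + 1 nodes)
    h = (b - a) / (2 * m)
    s = f(a) + f(b) + 4 * sum(f(a + (2 * j + 1) * h) for j in range(m)) \
        + 2 * sum(f(a + 2 * j * h) for j in range(1, m))
    return h * s / 3

I_exact = math.e - 1        # exact value of the integral of e^x over [0, 1]
tol = 5e-4

def minimal_m(rule):
    m = 1
    while abs(rule(math.exp, 0.0, 1.0, m) - I_exact) > tol:
        m += 1
    return m

m_tr = minimal_m(trapezoid)   # expected: 17
m_cs = minimal_m(simpson)     # expected: 2
```

The trapezoidal error behaves like $h^2(e-1)/12$, so halving the tolerance roughly multiplies $m$ by $\sqrt{2}$, while Simpson's $h^4$ decay makes $m = 2$ already sufficient.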

9. Consider the corrected trapezoidal formula (9.30) and check that $|E_1^{corr}(f)| \simeq 4|E_2(f)|$, where $E_1^{corr}(f)$ and $E_2(f)$ are defined in (9.31) and (9.16), respectively.

10. Compute, with an error less than $10^{-4}$, the following integrals:

(a) $\int_0^{\infty} \sin(x)/(1 + x^4)\,dx$;

(b) $\int_0^{\infty} e^{-x}(1 + x)^{-5}\,dx$;

(c) $\int_{-\infty}^{\infty} \cos(x)e^{-x^2}\,dx$.

11. Use the reduction midpoint and trapezoidal formulae for computing the double integral $I(f) = \int_{\Omega} \frac{y}{1 + xy}\,dx\,dy$ over the domain $\Omega = (0,1)^2$. Run Programs 78 and 79 with $M = 2^i$, for $i = 0, \dots, 10$, and plot in log-scale the absolute error in the two cases as a function of $M$. Which method is the most accurate? How many functional evaluations are needed to get an (absolute) accuracy of the order of $10^{-6}$?

[Solution: the exact integral is $I(f) = \log(4) - 1$, and almost $200^2 = 40000$ functional evaluations are needed.]

10
Orthogonal Polynomials in Approximation Theory

Trigonometric polynomials, as well as other orthogonal polynomials like Legendre's and Chebyshev's, are widely employed in approximation theory. This chapter addresses the most relevant properties of orthogonal polynomials, and introduces the transforms associated with them, in particular the discrete Fourier transform and the FFT, but also the Zeta and Wavelet transforms. Applications to interpolation, least-squares approximation, numerical differentiation and Gaussian integration are addressed.

10.1 Approximation of Functions by Generalized Fourier Series

Let $w = w(x)$ be a weight function on the interval $(-1,1)$, i.e., a nonnegative integrable function in $(-1,1)$. Let us denote by $\{p_k,\ k = 0, 1, \dots\}$ a system of algebraic polynomials, with $p_k$ of degree equal to $k$ for each $k$, mutually orthogonal on the interval $(-1,1)$ with respect to $w$. This means that

$$\int_{-1}^{1} p_k(x)p_m(x)w(x)\,dx = 0 \qquad \text{if } k \neq m.$$


Set $(f,g)_w = \int_{-1}^{1} f(x)g(x)w(x)\,dx$ and $\|f\|_w = (f,f)_w^{1/2}$; $(\cdot,\cdot)_w$ and $\|\cdot\|_w$ are, respectively, the scalar product and the norm for the function space

$$L^2_w = L^2_w(-1,1) = \left\{ f : (-1,1) \to \mathbb{R} : \int_{-1}^{1} f^2(x)w(x)\,dx < \infty \right\}. \qquad (10.1)$$

For any function $f \in L^2_w$ the series

$$Sf = \sum_{k=0}^{+\infty} f_k p_k, \qquad \text{with } f_k = \frac{(f, p_k)_w}{\|p_k\|_w^2},$$

is called the generalized Fourier series of $f$, and $f_k$ is the $k$-th Fourier coefficient. As is well known, $Sf$ converges in average (or in the sense of $L^2_w$) to $f$. This means that, letting for any integer $n$

$$f_n(x) = \sum_{k=0}^{n} f_k p_k(x) \qquad (10.2)$$

($f_n \in \mathbb{P}_n$ is the truncation of order $n$ of the generalized Fourier series of $f$), the following convergence result holds

$$\lim_{n \to +\infty} \|f - f_n\|_w = 0.$$

Thanks to Parseval's equality, we have

$$\|f\|_w^2 = \sum_{k=0}^{+\infty} f_k^2 \|p_k\|_w^2$$

and, for any $n$, $\|f - f_n\|_w^2 = \sum_{k=n+1}^{+\infty} f_k^2 \|p_k\|_w^2$ is the square of the remainder of the generalized Fourier series.

The polynomial $f_n \in \mathbb{P}_n$ satisfies the following minimization property

$$\|f - f_n\|_w = \min_{q \in \mathbb{P}_n} \|f - q\|_w. \qquad (10.3)$$

Indeed, since $f - f_n = \sum_{k=n+1}^{+\infty} f_k p_k$, the orthogonality of the polynomials $\{p_k\}$ implies $(f - f_n, q)_w = 0$ for all $q \in \mathbb{P}_n$. Then, the Cauchy-Schwarz inequality (8.29) yields

$$\|f - f_n\|_w^2 = (f - f_n, f - f_n)_w = (f - f_n, f - q)_w + (f - f_n, q - f_n)_w = (f - f_n, f - q)_w \leq \|f - f_n\|_w \|f - q\|_w \quad \forall q \in \mathbb{P}_n,$$

and (10.3) follows since $q$ is arbitrary in $\mathbb{P}_n$. In such a case, we say that $f_n$ is the orthogonal projection of $f$ over $\mathbb{P}_n$ in the sense of $L^2_w$. It is therefore interesting to compute the coefficients $f_k$ of $f_n$. As will be seen in later


sections, this is usually done by suitably approximating the integrals that appear in the definition of $f_k$. By doing so, one gets the so-called discrete coefficients $\tilde{f}_k$ of $f$, and, as a consequence, the new polynomial

$$f_n^*(x) = \sum_{k=0}^{n} \tilde{f}_k p_k(x) \qquad (10.4)$$

which is called the discrete truncation of order $n$ of the Fourier series of $f$.

Typically,

$$\tilde{f}_k = \frac{(f, p_k)_n}{\|p_k\|_n^2}, \qquad (10.5)$$

where, for any pair of continuous functions $f$ and $g$, $(f,g)_n$ is an approximation of the scalar product $(f,g)_w$ and $\|g\|_n = \sqrt{(g,g)_n}$ is the seminorm associated with $(\cdot,\cdot)_n$. In a manner analogous to what was done for $f_n$, it can be checked that

$$\|f - f_n^*\|_n = \min_{q \in \mathbb{P}_n} \|f - q\|_n \qquad (10.6)$$

and we say that $f_n^*$ is the approximation to $f$ in $\mathbb{P}_n$ in the least-squares sense (the reason for using this name will be made clear later on).
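As a concrete instance of the coefficients $f_k$, take the Legendre system on $(-1,1)$ (weight $w \equiv 1$, see Section 10.1.2) and $f(x) = x^2$. A Python sketch, approximating the integrals in the definition of $f_k$ with a composite Simpson rule (an illustrative helper, not one of the book's programs):

```python
def simpson(g, a, b, m=2000):
    # composite Simpson rule, accurate enough here for smooth integrands
    h = (b - a) / (2 * m)
    s = g(a) + g(b) + 4 * sum(g(a + (2 * j + 1) * h) for j in range(m)) \
        + 2 * sum(g(a + 2 * j * h) for j in range(1, m))
    return h * s / 3

# first three Legendre polynomials p_0, p_1, p_2
L = [lambda x: 1.0, lambda x: x, lambda x: 1.5 * x * x - 0.5]

f = lambda x: x * x
coef = []
for k in range(3):
    num = simpson(lambda x, k=k: f(x) * L[k](x), -1.0, 1.0)   # (f, p_k)_w
    den = simpson(lambda x, k=k: L[k](x) ** 2, -1.0, 1.0)     # ||p_k||_w^2 = 2/(2k+1)
    coef.append(num / den)

# expansion of x^2: x^2 = (1/3) L_0 + 0 * L_1 + (2/3) L_2,
# so here the truncation f_2 reproduces f exactly
```

Since $f \in \mathbb{P}_2$, the truncation $f_2$ coincides with $f$, illustrating that the projection is exact on $\mathbb{P}_n$.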

We conclude this section by recalling that, for any family of monic orthogonal polynomials $\{p_k\}$, the following recursive three-term formula holds (for the proof, see for instance [Gau96])

$$\begin{cases} p_{k+1}(x) = (x - \alpha_k)p_k(x) - \beta_k p_{k-1}(x), & k \geq 0, \\ p_{-1}(x) = 0, \quad p_0(x) = 1, \end{cases} \qquad (10.7)$$

where

$$\alpha_k = \frac{(xp_k, p_k)_w}{(p_k, p_k)_w}, \qquad \beta_{k+1} = \frac{(p_{k+1}, p_{k+1})_w}{(p_k, p_k)_w}, \qquad k \geq 0. \qquad (10.8)$$

Since $p_{-1} = 0$, the coefficient $\beta_0$ is arbitrary and is chosen according to the particular family of orthogonal polynomials at hand. The recursive three-term relation is generally quite stable and can thus be conveniently employed in the numerical computation of orthogonal polynomials, as will be seen in Section 10.6.
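The recurrence (10.7)-(10.8) translates directly into code. A minimal Python sketch for the monic Legendre case, where symmetry gives $\alpha_k = 0$ and $\beta_k = 1/(4 - 1/k^2)$ for $k \geq 1$ (these are the coefficients Program 82 below returns):

```python
def monic_legendre(n, x):
    """Evaluate the monic Legendre polynomial p_n at x via the
    three-term recurrence (10.7)."""
    p_prev, p = 0.0, 1.0                 # p_{-1}, p_0
    for k in range(n):
        alpha = 0.0                      # symmetric weight => alpha_k = 0
        beta = 0.0 if k == 0 else 1.0 / (4.0 - 1.0 / k**2)
        p_prev, p = p, (x - alpha) * p - beta * p_prev
    return p

# p_2(x) = x^2 - 1/3 and p_3(x) = x^3 - (3/5) x
val2 = monic_legendre(2, 1.0)   # expected 2/3
val3 = monic_legendre(3, 1.0)   # expected 2/5
```

Note that $\beta_0$ multiplies $p_{-1} = 0$, so its arbitrary value never enters the computation.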

In the forthcoming sections we introduce two relevant families of orthogonal

polynomials.

10.1.1 The Chebyshev Polynomials

Consider the Chebyshev weight function $w(x) = (1 - x^2)^{-1/2}$ on the interval $(-1,1)$ and, according to (10.1), introduce the space of square-integrable functions with respect to the weight $w$

$$L^2_w(-1,1) = \left\{ f : (-1,1) \to \mathbb{R} : \int_{-1}^{1} f^2(x)(1 - x^2)^{-1/2}\,dx < \infty \right\}.$$

A scalar product and a norm for this space are defined as

$$(f,g)_w = \int_{-1}^{1} f(x)g(x)(1 - x^2)^{-1/2}\,dx, \qquad \|f\|_w = \left( \int_{-1}^{1} f^2(x)(1 - x^2)^{-1/2}\,dx \right)^{1/2}. \qquad (10.9)$$

The Chebyshev polynomials are defined as follows

$$T_k(x) = \cos k\theta, \qquad \theta = \arccos x, \qquad k = 0, 1, 2, \dots \qquad (10.10)$$

They can be recursively generated by the following formula (a consequence of (10.7); see [DR75], pp. 25-26)

$$\begin{cases} T_{k+1}(x) = 2xT_k(x) - T_{k-1}(x), & k = 1, 2, \dots \\ T_0(x) = 1, \quad T_1(x) = x. \end{cases} \qquad (10.11)$$
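Definition (10.10) and the recurrence (10.11) must agree; a quick Python check:

```python
import math

def cheb_rec(k, x):
    # T_k via the three-term recurrence (10.11)
    t_prev, t = 1.0, x               # T_0, T_1
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

x = 0.3
for k in range(8):
    direct = math.cos(k * math.acos(x))   # definition (10.10)
    assert abs(cheb_rec(k, x) - direct) < 1e-12

t5 = cheb_rec(5, x)   # T_5(0.3) = 16x^5 - 20x^3 + 5x = 0.99888
```

The recurrence is the form actually used in computation, since it avoids the trigonometric evaluations of (10.10) and extends $T_k$ outside $[-1,1]$.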

In particular, for any $k \geq 0$, we notice that $T_k \in \mathbb{P}_k$, i.e., $T_k(x)$ is an algebraic polynomial of degree $k$ with respect to $x$. Using well-known trigonometric relations, we have

$$(T_k, T_n)_w = 0 \ \text{if } k \neq n, \qquad (T_n, T_n)_w = \begin{cases} c_0 = \pi & \text{if } n = 0, \\ c_n = \pi/2 & \text{if } n \neq 0, \end{cases}$$

which expresses the orthogonality of the Chebyshev polynomials with respect to the scalar product $(\cdot,\cdot)_w$. Therefore, the Chebyshev series of a function $f \in L^2_w$ takes the form

$$Cf = \sum_{k=0}^{\infty} f_k T_k, \qquad \text{with } f_k = \frac{1}{c_k} \int_{-1}^{1} f(x)T_k(x)(1 - x^2)^{-1/2}\,dx.$$

Notice that $\|T_n\|_\infty = 1$ for every $n$ and that the following minimax property holds

$$2^{1-n} = \|2^{1-n}T_n\|_\infty \leq \min_{p \in \mathbb{P}^1_n} \|p\|_\infty,$$

where $\mathbb{P}^1_n = \{p(x) = \sum_{k=0}^{n} a_k x^k,\ a_n = 1\}$ denotes the subset of polynomials of degree $n$ with leading coefficient equal to 1.


10.1.2 The Legendre Polynomials

The Legendre polynomials are orthogonal polynomials over the interval $(-1,1)$ with respect to the weight function $w(x) = 1$. For these polynomials, $L^2_w$ is the usual $L^2(-1,1)$ space introduced in (8.25), while $(\cdot,\cdot)_w$ and $\|\cdot\|_w$ coincide with the scalar product and norm in $L^2(-1,1)$, respectively given by

$$(f,g) = \int_{-1}^{1} f(x)g(x)\,dx, \qquad \|f\|_{L^2(-1,1)} = \left( \int_{-1}^{1} f^2(x)\,dx \right)^{1/2}.$$

The Legendre polynomials are defined as

$$L_k(x) = \frac{1}{2^k} \sum_{l=0}^{[k/2]} (-1)^l \binom{k}{l} \binom{2k-2l}{k} x^{k-2l}, \qquad k = 0, 1, \dots \qquad (10.12)$$

where $[k/2]$ is the integer part of $k/2$, or, recursively, through the three-term relation

$$\begin{cases} L_{k+1}(x) = \dfrac{2k+1}{k+1}\, x L_k(x) - \dfrac{k}{k+1}\, L_{k-1}(x), & k = 1, 2, \dots \\ L_0(x) = 1, \quad L_1(x) = x. \end{cases}$$

For every $k = 0, 1, \dots$, $L_k \in \mathbb{P}_k$ and $(L_k, L_m) = \delta_{km}(k + 1/2)^{-1}$ for $k, m = 0, 1, 2, \dots$. For any function $f \in L^2(-1,1)$, its Legendre series takes the following form

$$Lf = \sum_{k=0}^{\infty} f_k L_k, \qquad \text{with } f_k = \left( k + \frac{1}{2} \right) \int_{-1}^{1} f(x)L_k(x)\,dx.$$
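A Python sketch checking that the closed form (10.12) and the three-term relation produce the same values:

```python
from math import comb

def legendre_closed(k, x):
    # closed form (10.12)
    return sum((-1)**l * comb(k, l) * comb(2*k - 2*l, k) * x**(k - 2*l)
               for l in range(k // 2 + 1)) / 2**k

def legendre_rec(k, x):
    # three-term relation: L_{j+1} = ((2j+1) x L_j - j L_{j-1}) / (j+1)
    l_prev, l = 1.0, x               # L_0, L_1
    if k == 0:
        return l_prev
    for j in range(1, k):
        l_prev, l = l, ((2*j + 1) * x * l - j * l_prev) / (j + 1)
    return l

vals = [(legendre_closed(k, 0.5), legendre_rec(k, 0.5)) for k in range(6)]
l2 = legendre_rec(2, 0.5)   # L_2(0.5) = (3*(0.5)^2 - 1)/2 = -0.125
```

In practice the recurrence is preferred: the binomial coefficients in (10.12) grow quickly and the alternating sum loses accuracy for large $k$.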

Remark 10.1 (The Jacobi polynomials) The polynomials previously introduced belong to the wider family of Jacobi polynomials $\{J_k^{(\alpha,\beta)},\ k = 0, \dots, n\}$, that are orthogonal with respect to the weight $w(x) = (1-x)^{\alpha}(1+x)^{\beta}$, for $\alpha, \beta > -1$. Indeed, setting $\alpha = \beta = 0$ we recover the Legendre polynomials, while choosing $\alpha = \beta = -1/2$ gives the Chebyshev polynomials.

10.2 Gaussian Integration and Interpolation

Orthogonal polynomials play a crucial role in devising quadrature formulae with maximal degrees of exactness. Let $x_0, \dots, x_n$ be $n+1$ given distinct points in the interval $[-1,1]$. For the approximation of the weighted integral $I_w(f) = \int_{-1}^{1} f(x)w(x)\,dx$, being $f \in C^0([-1,1])$, we consider quadrature rules of the type

$$I_{n,w}(f) = \sum_{i=0}^{n} \alpha_i f(x_i) \qquad (10.13)$$

where the $\alpha_i$ are coefficients to be suitably determined. Obviously, both nodes and weights depend on $n$; however, this dependence will be understood. Denoting by

$$E_{n,w}(f) = I_w(f) - I_{n,w}(f)$$

the error between the exact integral and its approximation (10.13), if $E_{n,w}(p) = 0$ for any $p \in \mathbb{P}_r$ (for a suitable $r \geq 0$) we shall say that formula (10.13) has degree of exactness $r$ with respect to the weight $w$. This definition generalizes the one given for ordinary integration with weight $w = 1$.

Clearly, we can get a degree of exactness equal to (at least) $n$ taking

$$I_{n,w}(f) = \int_{-1}^{1} \Pi_n f(x)\, w(x)\,dx,$$

where $\Pi_n f \in \mathbb{P}_n$ is the Lagrange interpolating polynomial of the function $f$ at the nodes $\{x_i,\ i = 0, \dots, n\}$, given by (8.4). Therefore, (10.13) has degree of exactness at least equal to $n$ taking

$$\alpha_i = \int_{-1}^{1} l_i(x)w(x)\,dx, \qquad i = 0, \dots, n, \qquad (10.14)$$

where $l_i \in \mathbb{P}_n$ is the $i$-th characteristic Lagrange polynomial such that $l_i(x_j) = \delta_{ij}$, for $i, j = 0, \dots, n$.

The question that arises is whether suitable choices of the nodes exist such that the degree of exactness is greater than $n$, say, equal to $r = n + m$ for some $m > 0$. The answer to this question is furnished by the following theorem, due to Jacobi [Jac26].

Theorem 10.1 For a given $m > 0$, the quadrature formula (10.13) has degree of exactness $n + m$ iff it is of interpolatory type and the nodal polynomial $\omega_{n+1}$ (8.6) associated with the nodes $\{x_i\}$ is such that

$$\int_{-1}^{1} \omega_{n+1}(x)p(x)w(x)\,dx = 0, \qquad \forall p \in \mathbb{P}_{m-1}. \qquad (10.15)$$

Proof. Let us prove that these conditions are sufficient. If $f \in \mathbb{P}_{n+m}$, then there exist a quotient $\pi_{m-1} \in \mathbb{P}_{m-1}$ and a remainder $q_n \in \mathbb{P}_n$ such that $f = \omega_{n+1}\pi_{m-1} + q_n$. Since the degree of exactness of an interpolatory formula with $n+1$ nodes is equal to $n$ (at least), we get

$$\sum_{i=0}^{n} \alpha_i q_n(x_i) = \int_{-1}^{1} q_n(x)w(x)\,dx = \int_{-1}^{1} f(x)w(x)\,dx - \int_{-1}^{1} \omega_{n+1}(x)\pi_{m-1}(x)w(x)\,dx.$$

As a consequence of (10.15), the last integral is null, thus

$$\int_{-1}^{1} f(x)w(x)\,dx = \sum_{i=0}^{n} \alpha_i q_n(x_i) = \sum_{i=0}^{n} \alpha_i f(x_i).$$

Since $f$ is arbitrary, we conclude that $E_{n,w}(f) = 0$ for any $f \in \mathbb{P}_{n+m}$. Proving that the conditions are also necessary is an exercise left to the reader. $\diamond$

Corollary 10.1 The maximum degree of exactness of the quadrature formula (10.13) is $2n + 1$.

Proof. If this were not true, one could take $m \geq n + 2$ in the previous theorem. This, in turn, would allow us to choose $p = \omega_{n+1}$ in (10.15) and come to the conclusion that $\omega_{n+1}$ is identically zero, which is absurd. $\diamond$

Setting $m = n + 1$ (the maximum admissible value), from (10.15) we get that the nodal polynomial $\omega_{n+1}$ satisfies the relation

$$\int_{-1}^{1} \omega_{n+1}(x)p(x)w(x)\,dx = 0, \qquad \forall p \in \mathbb{P}_n.$$

Since $\omega_{n+1}$ is a polynomial of degree $n+1$ orthogonal to all the polynomials of lower degree, we conclude that $\omega_{n+1}$ is the only monic polynomial multiple of $p_{n+1}$ (recall that $\{p_k\}$ is the system of orthogonal polynomials introduced in Section 10.1). In particular, its roots $\{x_j\}$ coincide with those of $p_{n+1}$, that is

$$p_{n+1}(x_j) = 0, \qquad j = 0, \dots, n. \qquad (10.16)$$

The abscissae $\{x_j\}$ are the Gauss nodes associated with the weight function $w(x)$. We can thus conclude that the quadrature formula (10.13) with coefficients and nodes given by (10.14) and (10.16), respectively, has degree of exactness $2n+1$, the maximum value that can be achieved using interpolatory quadrature formulae with $n+1$ nodes, and is called the Gauss quadrature formula.

Its weights are all positive and the nodes are internal to the interval $(-1,1)$ (see, for instance, [CHQZ88], p. 56). However, it is often useful to also include the end points of the interval among the quadrature nodes. By doing so, the Gauss formula with the highest degree of exactness is the one that employs as nodes the $n+1$ roots of the polynomial

$$\bar\omega_{n+1}(x) = p_{n+1}(x) + a p_n(x) + b p_{n-1}(x), \qquad (10.17)$$

where the constants $a$ and $b$ are selected in such a way that $\bar\omega_{n+1}(-1) = \bar\omega_{n+1}(1) = 0$.

Denoting these roots by $x_0 = -1, x_1, \dots, x_n = 1$, the coefficients $\{\alpha_i,\ i = 0, \dots, n\}$ can then be obtained from the usual formulae (10.14), that is,

$$\alpha_i = \int_{-1}^{1} l_i(x)w(x)\,dx, \qquad i = 0, \dots, n,$$

where $l_i \in \mathbb{P}_n$ is the $i$-th characteristic Lagrange polynomial such that $l_i(x_j) = \delta_{ij}$, for $i, j = 0, \dots, n$. The quadrature formula

$$I_{n,w}^{GL}(f) = \sum_{i=0}^{n} \alpha_i f(x_i) \qquad (10.18)$$

is called the Gauss-Lobatto formula with $n+1$ nodes, and has degree of exactness $2n - 1$. Indeed, for any $f \in \mathbb{P}_{2n-1}$, there exist a polynomial $\pi_{n-2} \in \mathbb{P}_{n-2}$ and a remainder $q_n \in \mathbb{P}_n$ such that $f = \bar\omega_{n+1}\pi_{n-2} + q_n$. The quadrature formula (10.18) has degree of exactness at least equal to $n$ (being interpolatory with $n+1$ distinct nodes), thus we get

$$\sum_{j=0}^{n} \alpha_j q_n(x_j) = \int_{-1}^{1} q_n(x)w(x)\,dx = \int_{-1}^{1} f(x)w(x)\,dx - \int_{-1}^{1} \bar\omega_{n+1}(x)\pi_{n-2}(x)w(x)\,dx.$$

From (10.17) we conclude that $\bar\omega_{n+1}$ is orthogonal to all the polynomials of degree $\leq n - 2$, so that the last integral is null. Moreover, since $f(x_j) = q_n(x_j)$ for $j = 0, \dots, n$, we conclude that

$$\int_{-1}^{1} f(x)w(x)\,dx = \sum_{i=0}^{n} \alpha_i f(x_i), \qquad \forall f \in \mathbb{P}_{2n-1}.$$

Denoting by $\Pi_{n,w}^{GL}f$ the polynomial of degree $n$ that interpolates $f$ at the nodes $\{x_j,\ j = 0, \dots, n\}$, we get

$$\Pi_{n,w}^{GL}f(x) = \sum_{i=0}^{n} f(x_i)l_i(x) \qquad (10.19)$$

and thus $I_{n,w}^{GL}(f) = \int_{-1}^{1} \Pi_{n,w}^{GL}f(x)\,w(x)\,dx$.


Remark 10.2 In the special case where the Gauss-Lobatto quadrature is considered with respect to the Jacobi weight $w(x) = (1-x)^{\alpha}(1+x)^{\beta}$, with $\alpha, \beta > -1$, the internal nodes $x_1, \dots, x_{n-1}$ can be identified as the roots of the polynomial $(J_n^{(\alpha,\beta)})'$, that is, the extremants of the $n$-th Jacobi polynomial $J_n^{(\alpha,\beta)}$ (see [CHQZ88], pp. 57-58).

The following convergence result holds for Gaussian integration (see [Atk89], Chapter 5)

$$\lim_{n \to +\infty} \left| \int_{-1}^{1} f(x)w(x)\,dx - \sum_{j=0}^{n} \alpha_j f(x_j) \right| = 0, \qquad \forall f \in C^0([-1,1]).$$

A similar result also holds for Gauss-Lobatto integration. If the integrand function is not only continuous, but also differentiable up to the order $p \geq 1$, we shall see that Gaussian integration converges with an order of infinitesimal with respect to $1/n$ that is larger when $p$ is greater. In the forthcoming sections, the previous results will be specified in the cases of the Chebyshev and Legendre polynomials.

Remark 10.3 (Integration over an arbitrary interval) A quadrature formula with nodes $\xi_j$ and coefficients $\beta_j$, $j = 0, \dots, n$, over the interval $[-1,1]$ can be mapped on any interval $[a,b]$. Indeed, let $\varphi : [-1,1] \to [a,b]$ be the affine map $x = \varphi(\xi) = \frac{b-a}{2}\xi + \frac{a+b}{2}$. Then

$$\int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^{1} (f \circ \varphi)(\xi)\,d\xi.$$

Therefore, we can employ on the interval $[a,b]$ the quadrature formula with nodes $x_j = \varphi(\xi_j)$ and weights $\alpha_j = \frac{b-a}{2}\beta_j$. Notice that this formula maintains on the interval $[a,b]$ the same degree of exactness of the generating formula over $[-1,1]$. Indeed, assuming that

$$\int_{-1}^{1} p(\xi)\,d\xi = \sum_{j=0}^{n} p(\xi_j)\beta_j$$

for any polynomial $p$ of degree $r$ over $[-1,1]$ (for a suitable integer $r$), for any polynomial $q$ of the same degree on $[a,b]$ we get

$$\sum_{j=0}^{n} q(x_j)\alpha_j = \frac{b-a}{2} \sum_{j=0}^{n} (q \circ \varphi)(\xi_j)\beta_j = \frac{b-a}{2} \int_{-1}^{1} (q \circ \varphi)(\xi)\,d\xi = \int_a^b q(x)\,dx,$$

having recalled that $(q \circ \varphi)(\xi)$ is a polynomial of degree $r$ on $[-1,1]$.
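For instance, the two-point Gauss-Legendre rule on $[-1,1]$ (nodes $\xi = \pm 1/\sqrt{3}$, weights $\beta = 1$, exact up to degree 3) can be mapped to $[a,b]$ as just described. A Python sketch:

```python
import math

def gauss2_on(a, b):
    """Two-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]
    via x = ((b - a)/2) * xi + (a + b)/2."""
    xi = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]   # reference nodes
    beta = [1.0, 1.0]                                    # reference weights
    nodes = [(b - a) / 2.0 * s + (a + b) / 2.0 for s in xi]
    weights = [(b - a) / 2.0 * w for w in beta]
    return nodes, weights

# degree of exactness 3 is preserved on [0, 2]: the integral of x^3 there is 4
nodes, weights = gauss2_on(0.0, 2.0)
approx = sum(w * x**3 for x, w in zip(nodes, weights))
```

The sum of the mapped weights equals $b - a$, as it must, since the rule integrates constants exactly.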


10.3 Chebyshev Integration and Interpolation

If Gaussian quadratures are considered with respect to the Chebyshev weight $w(x) = (1 - x^2)^{-1/2}$, Gauss nodes and coefficients are given by

$$x_j = -\cos\frac{(2j+1)\pi}{2(n+1)}, \qquad \alpha_j = \frac{\pi}{n+1}, \qquad 0 \leq j \leq n, \qquad (10.20)$$

while Gauss-Lobatto nodes and weights are

$$\bar x_j = -\cos\frac{\pi j}{n}, \qquad \bar\alpha_j = \frac{\pi}{d_j n}, \qquad 0 \leq j \leq n,\ n \geq 1, \qquad (10.21)$$

where $d_0 = d_n = 2$ and $d_j = 1$ for $j = 1, \dots, n-1$. Notice that the Gauss nodes (10.20) are, for a fixed $n \geq 0$, the zeros of the Chebyshev polynomial $T_{n+1} \in \mathbb{P}_{n+1}$, while, for $n \geq 1$, the internal nodes $\{\bar x_j,\ j = 1, \dots, n-1\}$ are the zeros of $T_n'$, as anticipated in Remark 10.2.
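A Python check of (10.20): with $n = 1$ (two nodes) the rule has degree of exactness $2n + 1 = 3$, so it must integrate $x^2$ against the Chebyshev weight exactly, and $\int_{-1}^{1} x^2 (1-x^2)^{-1/2}\,dx = \pi/2$:

```python
import math

def cheb_gauss(n):
    # Chebyshev-Gauss nodes and weights (10.20)
    nodes = [-math.cos((2*j + 1) * math.pi / (2*(n + 1))) for j in range(n + 1)]
    weights = [math.pi / (n + 1)] * (n + 1)
    return nodes, weights

nodes, weights = cheb_gauss(1)
# weighted moments against (1 - x^2)^(-1/2)
m0 = sum(weights)                                   # exact value: pi
m2 = sum(w * x**2 for x, w in zip(nodes, weights))  # exact value: pi/2
```

Note that the weights are all equal, a peculiarity of the Chebyshev weight.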

Denoting by $\Pi_{n,w}^{GL}f$ the polynomial of degree $n$ that interpolates $f$ at the $n+1$ nodes (10.21), it can be shown that the interpolation error can be bounded as

$$\|f - \Pi_{n,w}^{GL}f\|_w \leq Cn^{-s}\|f\|_{s,w}, \qquad \text{for } s \geq 1, \qquad (10.22)$$

where $\|\cdot\|_w$ is the norm in $L^2_w$ defined in (10.9), provided that for some $s \geq 1$ the function $f$ has derivatives $f^{(k)}$ of order $k = 0, \dots, s$ in $L^2_w$. In such a case

$$\|f\|_{s,w} = \left( \sum_{k=0}^{s} \|f^{(k)}\|_w^2 \right)^{1/2}. \qquad (10.23)$$

Here and in the following, $C$ is a constant independent of $n$ that can assume different values at different places. In particular, for any continuous function $f$ the following pointwise error estimate can be derived (see Exercise 3)

$$\|f - \Pi_{n,w}^{GL}f\|_\infty \leq Cn^{1/2-s}\|f\|_{s,w}. \qquad (10.24)$$

Thus, $\Pi_{n,w}^{GL}f$ converges pointwise to $f$ as $n \to \infty$, for any $f \in C^1([-1,1])$.

The same kind of results (10.22) and (10.24) hold if $\Pi_{n,w}^{GL}f$ is replaced with the polynomial $\Pi_n^{G}f$ of degree $n$ that interpolates $f$ at the $n+1$ Gauss nodes $x_j$ in (10.20). (For the proof of these results see, for instance, [CHQZ88], p. 298, or [QV94], p. 112.) We have also the following result (see [Riv74], p. 13)

$$\|f - \Pi_n^G f\|_\infty \leq (1 + \Lambda_n)E_n^*(f), \qquad \text{with } \Lambda_n \leq \frac{2}{\pi}\log(n+1) + 1, \qquad (10.25)$$

where $E_n^*(f) = \inf_{p \in \mathbb{P}_n}\|f - p\|_\infty$ is the best approximation error for $f$ in $\mathbb{P}_n$ and $\Lambda_n$ is the Lebesgue constant associated with the Chebyshev nodes (10.20). As far as the numerical integration error is concerned, let us consider, for instance, the Gauss-Lobatto quadrature rule (10.18) with nodes and weights given in (10.21). First of all, notice that

$$\int_{-1}^{1} f(x)(1 - x^2)^{-1/2}\,dx = \lim_{n \to \infty} I_{n,w}^{GL}(f)$$

for any function $f$ whose left integral is finite (see [Sze67], p. 342). If, moreover, $\|f\|_{s,w}$ is finite for some $s \geq 1$, we have

$$\left| \int_{-1}^{1} f(x)(1 - x^2)^{-1/2}\,dx - I_{n,w}^{GL}(f) \right| \leq Cn^{-s}\|f\|_{s,w}. \qquad (10.26)$$

This result follows from the more general one

$$|(f, v_n)_w - (f, v_n)_n| \leq Cn^{-s}\|f\|_{s,w}\|v_n\|_w, \qquad \forall v_n \in \mathbb{P}_n, \qquad (10.27)$$

where the so-called discrete scalar product has been introduced

$$(f,g)_n = \sum_{j=0}^{n} \alpha_j f(x_j)g(x_j) = I_{n,w}^{GL}(fg). \qquad (10.28)$$

Actually, (10.26) follows from (10.27) setting $v_n \equiv 1$ and noticing that $\|v_n\|_w = \left( \int_{-1}^{1} (1 - x^2)^{-1/2}\,dx \right)^{1/2} = \sqrt{\pi}$. Thanks to (10.26) we can thus conclude that the (Chebyshev) Gauss-Lobatto formula has degree of exactness $2n - 1$ and order of accuracy (with respect to $n^{-1}$) equal to $s$, provided that $\|f\|_{s,w} < \infty$. Therefore, the order of accuracy is only limited by the regularity threshold $s$ of the integrand function. Completely similar considerations can be drawn for (Chebyshev) Gauss formulae with $n+1$ nodes.

Let us finally determine the coefficients $\tilde{f}_k$, $k = 0, \dots, n$, of the interpolating polynomial $\Pi_{n,w}^{GL}f$ at the $n+1$ Gauss-Lobatto nodes in the expansion with respect to the Chebyshev polynomials (10.10)

$$\Pi_{n,w}^{GL}f(x) = \sum_{k=0}^{n} \tilde{f}_k T_k(x). \qquad (10.29)$$

Notice that $\Pi_{n,w}^{GL}f$ coincides with the discrete truncation $f_n^*$ of the Chebyshev series defined in (10.4). Enforcing the equality $\Pi_{n,w}^{GL}f(x_j) = f(x_j)$, $j = 0, \dots, n$, we find

$$f(x_j) = \sum_{k=0}^{n} \cos\frac{kj\pi}{n}\,\tilde{f}_k, \qquad j = 0, \dots, n. \qquad (10.30)$$


Recalling the exactness of the Gauss-Lobatto quadrature, it can be checked that (see Exercise 2)

$$\tilde{f}_k = \frac{2}{n d_k} \sum_{j=0}^{n} \frac{1}{d_j} \cos\frac{kj\pi}{n}\, f(x_j), \qquad k = 0, \dots, n. \qquad (10.31)$$

Relation (10.31) yields the discrete coefficients $\{\tilde{f}_k,\ k = 0, \dots, n\}$ in terms of the nodal values $\{f(x_j),\ j = 0, \dots, n\}$. For this reason it is called the Chebyshev discrete transform (CDT) and, thanks to its trigonometric structure, it can be efficiently computed using the FFT algorithm (Fast Fourier Transform) with a number of floating-point operations of the order of $n\log_2 n$ (see Section 10.9.2). Of course, (10.30) is the inverse of the CDT, and can be computed using the FFT.
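A direct Python implementation of (10.31) and its inverse (10.30), written for clarity with plain sums rather than the FFT (hence $O(n^2)$ cost), and with the nodes ordered as $x_j = \cos(j\pi/n)$: transforming the samples of $T_3$ must return the coefficient vector with a single 1 in position 3.

```python
import math

def cdt(f_nodes):
    # Chebyshev discrete transform (10.31); d_0 = d_n = 2, d_j = 1 otherwise
    n = len(f_nodes) - 1
    d = [2.0] + [1.0] * (n - 1) + [2.0]
    return [2.0 / (n * d[k]) *
            sum(math.cos(k * j * math.pi / n) * f_nodes[j] / d[j]
                for j in range(n + 1))
            for k in range(n + 1)]

def icdt(coeffs):
    # inverse transform (10.30)
    n = len(coeffs) - 1
    return [sum(coeffs[k] * math.cos(k * j * math.pi / n) for k in range(n + 1))
            for j in range(n + 1)]

n = 5
xs = [math.cos(j * math.pi / n) for j in range(n + 1)]   # Gauss-Lobatto nodes
samples = [4 * x**3 - 3 * x for x in xs]                 # f = T_3
coeffs = cdt(samples)
back = icdt(coeffs)                                      # round trip
```

A production implementation would replace both loops with an FFT of the even extension of the data, which is exactly what brings the cost down to $O(n\log_2 n)$.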

10.4 Legendre Integration and Interpolation

As previously noticed, the Legendre weight is $w(x) \equiv 1$. For $n \geq 0$, the Gauss nodes and the related coefficients are given by

$$x_j \text{ zeros of } L_{n+1}(x), \qquad \alpha_j = \frac{2}{(1 - x_j^2)[L_{n+1}'(x_j)]^2}, \qquad j = 0, \dots, n, \qquad (10.32)$$

while the Gauss-Lobatto ones are, for $n \geq 1$,

$$\bar x_0 = -1, \quad \bar x_n = 1, \quad \bar x_j \text{ zeros of } L_n'(x),\ j = 1, \dots, n-1, \qquad (10.33)$$

$$\bar\alpha_j = \frac{2}{n(n+1)}\,\frac{1}{[L_n(\bar x_j)]^2}, \qquad j = 0, \dots, n, \qquad (10.34)$$

where $L_n$ is the $n$-th Legendre polynomial defined in (10.12). It can be checked that, for a suitable constant $C$ independent of $n$,

$$\frac{2}{n(n+1)} \leq \bar\alpha_j \leq \frac{C}{n}, \qquad \forall j = 0, \dots, n$$

(see [BM92], p. 76). Then, letting $\Pi_n^{GL}f$ be the polynomial of degree $n$ that interpolates $f$ at the $n+1$ nodes $\bar x_j$ given by (10.33), it can be proved that it fulfills the same error estimates as those reported in (10.22) and (10.24) in the case of the corresponding Chebyshev polynomial. Of course, the norm $\|\cdot\|_w$ must here be replaced by the norm $\|\cdot\|_{L^2(-1,1)}$, while $\|f\|_{s,w}$ becomes

$$\|f\|_s = \left( \sum_{k=0}^{s} \|f^{(k)}\|_{L^2(-1,1)}^2 \right)^{1/2}. \qquad (10.35)$$
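For $n = 3$ the nodes (10.33) and weights (10.34) have a closed form: $L_3'(x) = (15x^2 - 3)/2$ vanishes at $\pm 1/\sqrt{5}$; together with the endpoints these give four nodes, and (10.34) yields weights $1/6$ at $\pm 1$ and $5/6$ at $\pm 1/\sqrt{5}$. A Python check that the rule integrates $x^4$ exactly (degree of exactness $2n - 1 = 5$, and $\int_{-1}^{1} x^4\,dx = 2/5$):

```python
import math

def L3(x):
    # Legendre polynomial L_3
    return 0.5 * (5.0 * x**3 - 3.0 * x)

n = 3
s5 = 1.0 / math.sqrt(5.0)
nodes = [-1.0, -s5, s5, 1.0]                  # zeros of L'_3 plus the endpoints
weights = [2.0 / (n * (n + 1)) / L3(x)**2 for x in nodes]     # formula (10.34)

total = sum(weights)                          # must equal 2, the measure of (-1, 1)
m4 = sum(w * x**4 for x, w in zip(nodes, weights))            # expected: 2/5
```

For larger $n$ the interior nodes are no longer available in closed form and are computed numerically, e.g. via the eigenvalue approach of Section 10.6.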


The same kinds of results are ensured if $\Pi_n^{GL}f$ is replaced by the polynomial of degree $n$ that interpolates $f$ at the $n+1$ nodes $x_j$ given by (10.32). Referring to the discrete scalar product defined in (10.28), but taking now the nodes and coefficients given by (10.33) and (10.34), we see that $(\cdot,\cdot)_n$ is an approximation of the usual scalar product $(\cdot,\cdot)$ of $L^2(-1,1)$. Actually, the relation equivalent to (10.27) now reads

$$|(f, v_n) - (f, v_n)_n| \leq Cn^{-s}\|f\|_s\|v_n\|_{L^2(-1,1)}, \qquad \forall v_n \in \mathbb{P}_n, \qquad (10.36)$$

and holds for any $s \geq 1$ such that $\|f\|_s < \infty$. In particular, setting $v_n \equiv 1$, we get $\|v_n\| = \sqrt{2}$, and from (10.36) it follows that

$$\left| \int_{-1}^{1} f(x)\,dx - I_n^{GL}(f) \right| \leq Cn^{-s}\|f\|_s, \qquad (10.37)$$

which demonstrates a convergence of the Gauss-Legendre-Lobatto quadrature formula to the exact integral of $f$ with order of accuracy $s$ with respect to $n^{-1}$, provided that $\|f\|_s < \infty$. A similar result holds for the Gauss-Legendre quadrature formulae.

Example 10.1 Consider the approximate evaluation of the integral of $f(x) = |x|^{\alpha + 3/5}$ over $[-1,1]$ for $\alpha = 0, 1, 2$. Notice that $f$ has "piecewise" derivatives up to order $s = s(\alpha) = \alpha + 1$ in $L^2(-1,1)$. Figure 10.1 shows the behavior of the error as a function of $n$ for the Gauss-Legendre quadrature formula. According to (10.37), the convergence rate of the formula increases by one when $\alpha$ increases by one.

FIGURE 10.1. The quadrature error in logarithmic scale as a function of $n$ in the case of a function with the first $s$ derivatives in $L^2(-1,1)$, for $s = 1$ (solid line), $s = 2$ (dashed line), $s = 3$ (dotted line)


The interpolating polynomial at the nodes (10.33) is given by

$$\Pi_n^{GL}f(x) = \sum_{k=0}^{n} \tilde{f}_k L_k(x). \qquad (10.38)$$

Notice that also in this case $\Pi_n^{GL}f$ coincides with the discrete truncation $f_n^*$ of the Legendre series defined in (10.4). Proceeding as in the previous section, we get

$$f(\bar x_j) = \sum_{k=0}^{n} \tilde{f}_k L_k(\bar x_j), \qquad j = 0, \dots, n, \qquad (10.39)$$

and also

$$\tilde{f}_k = \begin{cases} \dfrac{2k+1}{n(n+1)} \displaystyle\sum_{j=0}^{n} \dfrac{L_k(\bar x_j)}{[L_n(\bar x_j)]^2}\, f(\bar x_j), & k = 0, \dots, n-1, \\[2mm] \dfrac{1}{n+1} \displaystyle\sum_{j=0}^{n} \dfrac{1}{L_n(\bar x_j)}\, f(\bar x_j), & k = n \end{cases} \qquad (10.40)$$

(see Exercise 6). Formulae (10.40) and (10.39) provide, respectively, the discrete Legendre transform (DLT) and its inverse.

10.5 Gaussian Integration over Unbounded Intervals

We consider integration on both the half line and the whole of the real axis. In both cases we use interpolatory Gaussian formulae whose nodes are the zeros of the Laguerre and Hermite orthogonal polynomials, respectively.

The Laguerre polynomials. These are algebraic polynomials, orthogonal on the interval $[0, +\infty)$ with respect to the weight function $w(x) = e^{-x}$. They are defined by

$$L_n(x) = e^x \frac{d^n}{dx^n}(e^{-x}x^n), \qquad n \geq 0,$$

and satisfy the following three-term recursive relation

$$L_{n+1}(x) = (2n + 1 - x)L_n(x) - n^2 L_{n-1}(x), \quad n \geq 0, \qquad L_{-1} = 0,\ L_0 = 1.$$

∞

For any function f , de¬ne •(x) = f (x)ex . Then, I(f ) = 0 f (x)dx =

∞ ’x

e •(x)dx, so that it su¬ces to apply to this last integral the Gauss-

0

Laguerre quadratures, to get, for n ≥ 1 and f ∈ C 2n ([0, +∞))

n

(n!)2 (2n)

I(f ) = ±k •(xk ) + • (ξ), 0 < ξ < +∞, (10.41)

(2n)!

k=1


where the nodes $x_k$, for $k = 1, \dots, n$, are the zeros of $L_n$ and the weights are $\alpha_k = (n!)^2 x_k/[L_{n+1}(x_k)]^2$. From (10.41), one concludes that Gauss-Laguerre formulae are exact for functions $f$ of the type $\varphi e^{-x}$, where $\varphi \in \mathbb{P}_{2n-1}$. In a generalized sense, we can then state that they have optimal degrees of exactness equal to $2n - 1$.
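For $n = 2$ the zeros of $L_2(x) = x^2 - 4x + 2$ are $2 \pm \sqrt{2}$, and the corresponding weights reduce to $(2 \mp \sqrt{2})/4$ (a known closed form, stated here rather than re-derived from the weight formula above). A Python check of the generalized exactness, taking $\varphi(x) = x^3 \in \mathbb{P}_3 = \mathbb{P}_{2n-1}$:

```python
import math

# two-point Gauss-Laguerre rule: zeros of L_2(x) = x^2 - 4x + 2,
# with the (assumed known) weights (2 -/+ sqrt(2)) / 4
r2 = math.sqrt(2.0)
nodes = [2.0 - r2, 2.0 + r2]
weights = [(2.0 + r2) / 4.0, (2.0 - r2) / 4.0]

def quad(phi):
    # approximates the integral of exp(-x) * phi(x) over [0, +inf)
    return sum(w * phi(x) for x, w in zip(nodes, weights))

m3 = quad(lambda x: x**3)   # exact value: Gamma(4) = 6
m2 = quad(lambda x: x**2)   # exact value: Gamma(3) = 2
```

Both moments come out exact (up to rounding), while a polynomial of degree 4 would not, consistent with the degree of exactness $2n - 1 = 3$.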

Example 10.2 Using a Gauss-Laguerre quadrature formula with $n = 12$ to compute the integral in Example 9.12, we obtain the value 0.5997 with an absolute error with respect to exact integration equal to $2.96 \cdot 10^{-4}$. For the sake of comparison, the composite trapezoidal formula would require 277 nodes to obtain the same accuracy.

The Hermite polynomials. These are orthogonal polynomials on the real line with respect to the weight function $w(x) = e^{-x^2}$. They are defined by

$$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}(e^{-x^2}), \qquad n \geq 0.$$

Hermite polynomials can be recursively generated as

$$H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x), \quad n \geq 0, \qquad H_{-1} = 0,\ H_0 = 1.$$

As in the previous case, letting $\varphi(x) = f(x)e^{x^2}$, we have $I(f) = \int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{\infty} e^{-x^2}\varphi(x)\,dx$. Applying the Gauss-Hermite quadratures to this last integral, we obtain, for $n \geq 1$ and $f \in C^{2n}(\mathbb{R})$,

$$I(f) = \int_{-\infty}^{\infty} e^{-x^2}\varphi(x)\,dx = \sum_{k=1}^{n} \alpha_k \varphi(x_k) + \frac{(n!)\sqrt{\pi}}{2^n(2n)!}\varphi^{(2n)}(\xi), \qquad \xi \in \mathbb{R}, \qquad (10.42)$$

where the nodes $x_k$, for $k = 1, \dots, n$, are the zeros of $H_n$ and the weights are $\alpha_k = 2^{n+1}n!\sqrt{\pi}/[H_{n+1}(x_k)]^2$. As for Gauss-Laguerre quadratures, the Gauss-Hermite rules also are exact for functions $f$ of the form $\varphi e^{-x^2}$, where $\varphi \in \mathbb{P}_{2n-1}$; therefore, they have optimal degrees of exactness equal to $2n - 1$. More details on the subject can be found in [DR75], pp. 173-174.
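For $n = 2$ the zeros of $H_2(x) = 4x^2 - 2$ are $\pm 1/\sqrt{2}$, and the weight formula above gives $\alpha_1 = \alpha_2 = \sqrt{\pi}/2$. A Python check of the exactness for $\varphi \in \mathbb{P}_3$:

```python
import math

# two-point Gauss-Hermite rule: zeros of H_2(x) = 4x^2 - 2,
# both weights equal to sqrt(pi)/2
nodes = [-1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]
weights = [math.sqrt(math.pi) / 2.0] * 2

def quad(phi):
    # approximates the integral of exp(-x^2) * phi(x) over the real line
    return sum(w * phi(x) for x, w in zip(nodes, weights))

m0 = quad(lambda x: 1.0)    # exact value: sqrt(pi)
m2 = quad(lambda x: x**2)   # exact value: sqrt(pi)/2
m3 = quad(lambda x: x**3)   # exact value: 0 (odd integrand)
```

Odd moments vanish automatically because the nodes are symmetric with equal weights; the check on $m_2$ is the non-trivial one.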

10.6 Programs for the Implementation of Gaussian Quadratures

Programs 82, 83 and 84 compute the coefficients $\{\alpha_k\}$ and $\{\beta_k\}$, introduced in (10.8), in the cases of the Legendre, Laguerre and Hermite polynomials. These programs are then called by Program 85 for the computation of nodes and weights (10.32), in the case of the Gauss-Legendre formulae, and by Programs 86 and 87 for computing nodes and weights in the Gauss-Laguerre and Gauss-Hermite quadrature rules (10.41) and (10.42). All the codings reported in this section are excerpts from the library ORTHPOL [Gau94].

Program 82 - coeflege : Coefficients of Legendre polynomials

function [a, b] = coeflege(n)
if (n <= 1), disp(' n must be > 1 '); return; end
for k=1:n, a(k)=0; b(k)=0; end; b(1)=2;
for k=2:n, b(k)=1/(4-1/(k-1)^2); end

Program 83 - coeflagu : Coefficients of Laguerre polynomials

function [a, b] = coeflagu(n)
if (n <= 1), disp(' n must be > 1 '); return; end
a=zeros(n,1); b=zeros(n,1); a(1)=1; b(1)=1;
for k=2:n, a(k)=2*(k-1)+1; b(k)=(k-1)^2; end

Program 84 - coefherm : Coefficients of Hermite polynomials

function [a, b] = coefherm(n)
if (n <= 1), disp(' n must be > 1 '); return; end
a=zeros(n,1); b=zeros(n,1); b(1)=sqrt(4.*atan(1.));
for k=2:n, b(k)=0.5*(k-1); end

Program 85 - zplege : Coefficients of Gauss-Legendre formulae

function [x,w]=zplege(n)
if (n <= 1), disp(' n must be > 1 '); return; end
[a,b]=coeflege(n);
JacM=diag(a)+diag(sqrt(b(2:n)),1)+diag(sqrt(b(2:n)),-1);
[w,x]=eig(JacM); x=diag(x); scal=2; w=w(1,:)'.^2*scal;
[x,ind]=sort(x); w=w(ind);

Program 86 - zplagu : Coefficients of Gauss-Laguerre formulae

function [x,w]=zplagu(n)
if (n <= 1), disp(' n must be > 1 '); return; end
[a,b]=coeflagu(n);
JacM=diag(a)+diag(sqrt(b(2:n)),1)+diag(sqrt(b(2:n)),-1);
[w,x]=eig(JacM); x=diag(x); w=w(1,:)'.^2;

Program 87 - zpherm : Coefficients of Gauss-Hermite formulae

function [x,w]=zpherm(n)
if (n <= 1), disp(' n must be > 1 '); return; end
[a,b]=coefherm(n);
JacM=diag(a)+diag(sqrt(b(2:n)),1)+diag(sqrt(b(2:n)),-1);
[w,x]=eig(JacM); x=diag(x); scal=sqrt(pi); w=w(1,:)'.^2*scal;
[x,ind]=sort(x); w=w(ind);
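These programs implement the Golub-Welsch idea: the nodes are the eigenvalues of the symmetric tridiagonal Jacobi matrix built from the $\alpha_k$ and $\sqrt{\beta_k}$, and each weight is $\beta_0$ times the squared first component of the corresponding normalized eigenvector. For $n = 2$ in the Legendre case the Jacobi matrix is $2 \times 2$ and can be diagonalized by hand; a Python sketch of the same computation (not a translation of the MATLAB codes above):

```python
import math

# Legendre, n = 2: alpha = (0, 0), beta_1 = 1/3 (from coeflege), beta_0 = 2
off = math.sqrt(1.0 / 3.0)
# Jacobi matrix [[0, off], [off, 0]]: eigenvalues +/- off,
# normalized eigenvectors (1, +/-1)/sqrt(2)
nodes = [-off, off]
first_components_sq = [0.5, 0.5]      # squared first entries of the eigenvectors
weights = [2.0 * c for c in first_components_sq]   # scaled by beta_0 = 2

# this is the two-point Gauss-Legendre rule: nodes +/- 1/sqrt(3), weights 1
approx = sum(w * x**2 for x, w in zip(nodes, weights))   # integral of x^2 = 2/3
```

For general $n$ one would call a symmetric tridiagonal eigensolver, which is exactly the role of `eig(JacM)` in Programs 85-87.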

10.7 Approximation of a Function in the Least-Squares Sense

Given a function $f \in L^2_w(a,b)$, we look for a polynomial $r_n$ of degree $\leq n$ that satisfies

$$\|f - r_n\|_w = \min_{p_n \in \mathbb{P}_n} \|f - p_n\|_w,$$

where $w$ is a fixed weight function in $(a,b)$. Should it exist, $r_n$ is called a least-squares polynomial. The name derives from the fact that, if $w \equiv 1$, $r_n$ is the polynomial that minimizes the mean-square error $E = \|f - r_n\|_{L^2(a,b)}$ (see Exercise 8).

As seen in Section 10.1, $r_n$ coincides with the truncation $f_n$ of order $n$ of the Fourier series (see (10.2) and (10.3)). Depending on the choice of the weight $w(x)$, different least-squares polynomials arise, with different convergence properties.

Analogously to Section 10.1, we can introduce the discrete truncation $f_n^*$ (10.4) of the Chebyshev series (setting $p_k = T_k$) or the Legendre series (setting $p_k = L_k$). If the discrete scalar product induced by the Gauss-Lobatto quadrature rule (10.28) is used in (10.5), then the $\tilde{f}_k$'s coincide with the coefficients of the expansion of the interpolating polynomial $\Pi_{n,w}^{GL}f$ (see (10.29) in the Chebyshev case, or (10.38) in the Legendre case). Consequently, $f_n^* = \Pi_{n,w}^{GL}f$, i.e., the discrete truncation of the (Chebyshev or Legendre) series of $f$ turns out to coincide with the interpolating polynomial at the $n+1$ Gauss-Lobatto nodes. In particular, in such a case (10.6) is trivially satisfied, since $\|f - f_n^*\|_n = 0$.

10.7.1 Discrete Least-Squares Approximation

Several applications require representing, in a synthetic way and using elementary functions, a large set of data that are available at a discrete level, for instance the results of experimental measurements. This approximation process, often referred to as data fitting, can be satisfactorily solved using the discrete least-squares technique, which can be formulated as follows.

Assume we are given m + 1 pairs of data

\{(x_i, y_i), \ i = 0, \dots, m\} \qquad (10.43)


where y_i may represent, for instance, the value of a physical quantity measured at the position x_i. We assume that all the abscissae are distinct. We look for a polynomial p_n(x) = \sum_{i=0}^{n} a_i \varphi_i(x) such that

\sum_{j=0}^{m} w_j |p_n(x_j) - y_j|^2 \le \sum_{j=0}^{m} w_j |q_n(x_j) - y_j|^2 \quad \forall q_n \in \mathbb{P}_n, \qquad (10.44)

for suitable coefficients w_j > 0. If n = m the polynomial p_n clearly coincides with the interpolating polynomial of degree n at the nodes \{x_i\}. Problem (10.44) is called a discrete least-squares problem, since a discrete scalar product is involved, and is the discrete counterpart of the continuous least-squares problem. The solution p_n is therefore referred to as a least-squares polynomial. Notice that

|||q||| = \left( \sum_{j=0}^{m} w_j [q(x_j)]^2 \right)^{1/2} \qquad (10.45)

is an essentially strict seminorm on \mathbb{P}_n (see Exercise 7). By definition, a discrete norm (or seminorm) \|\cdot\|_* is essentially strict if \|f + g\|_* = \|f\|_* + \|g\|_* implies there exist nonnull \alpha, \beta such that \alpha f(x_i) + \beta g(x_i) = 0 for i = 0, \dots, m. Since ||| \cdot ||| is an essentially strict seminorm, problem (10.44) admits a unique solution (see [IK66], Section 3.5). Proceeding as in Section 3.13, we find

\sum_{k=0}^{n} a_k \sum_{j=0}^{m} w_j \varphi_k(x_j)\varphi_i(x_j) = \sum_{j=0}^{m} w_j y_j \varphi_i(x_j), \quad \forall i = 0, \dots, n,

which is called a system of normal equations, and can be conveniently written in the form

B^T B \mathbf{a} = B^T \mathbf{y}, \qquad (10.46)

where B is the rectangular (m+1) \times (n+1) matrix of entries b_{ij} = \varphi_j(x_i), i = 0, \dots, m, j = 0, \dots, n, \mathbf{a} \in \mathbb{R}^{n+1} is the vector of the unknown coefficients and \mathbf{y} \in \mathbb{R}^{m+1} is the vector of data.
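As a concrete sketch of (10.46), the following Python fragment builds B for the monomial basis φ_j(x) = x^j with unit weights w_j = 1 (both choices, like the data, are invented for the example) and checks that solving the normal equations agrees with an orthogonalization-based least-squares solver:

```python
import numpy as np

# Hypothetical data: m + 1 = 6 points, fitted with a polynomial of degree n = 2.
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = x**2 + 0.01*np.array([1, -1, 1, -1, 1, -1])   # near-parabolic data

n = 2
B = np.vander(x, n + 1, increasing=True)   # b_ij = phi_j(x_i) = x_i^j
a = np.linalg.solve(B.T @ B, B.T @ y)      # normal equations: B^T B a = B^T y

# The same least-squares solution from numpy's QR/SVD-based solver.
a_qr, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.allclose(a, a_qr))   # True
```

In exact arithmetic the residual B a − y is orthogonal to the columns of B, which is exactly what the normal equations express; in floating point, forming B^T B squares the conditioning, which is why library solvers prefer QR.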

Notice that the system of normal equations obtained in (10.46) is of the same nature as that introduced in Section 3.13 in the case of overdetermined systems. Actually, if w_j = 1 for j = 0, \dots, m, the above system can be regarded as the solution in the least-squares sense of the system

\sum_{k=0}^{n} a_k \varphi_k(x_i) = y_i, \quad i = 0, 1, \dots, m,


which would not admit a solution in the classical sense, since the number of rows is greater than the number of columns. In the case n = 1, the solution to (10.44) is a linear function, called the linear regression for the data fitting of (10.43). The associated system of normal equations is

\sum_{k=0}^{1} \sum_{j=0}^{m} w_j \varphi_i(x_j)\varphi_k(x_j) a_k = \sum_{j=0}^{m} w_j \varphi_i(x_j) y_j, \quad i = 0, 1.

Setting (f, g)_m = \sum_{j=0}^{m} f(x_j) g(x_j), the previous system becomes

(\varphi_0, \varphi_0)_m a_0 + (\varphi_1, \varphi_0)_m a_1 = (y, \varphi_0)_m,
(\varphi_0, \varphi_1)_m a_0 + (\varphi_1, \varphi_1)_m a_1 = (y, \varphi_1)_m,

where y(x) is a function that takes the value y_i at the nodes x_i, i = 0, \dots, m. After some algebra, we get this explicit form for the coefficients:

a_0 = \frac{(y, \varphi_0)_m (\varphi_1, \varphi_1)_m - (y, \varphi_1)_m (\varphi_1, \varphi_0)_m}{(\varphi_1, \varphi_1)_m (\varphi_0, \varphi_0)_m - (\varphi_0, \varphi_1)_m^2},

a_1 = \frac{(y, \varphi_1)_m (\varphi_0, \varphi_0)_m - (y, \varphi_0)_m (\varphi_1, \varphi_0)_m}{(\varphi_1, \varphi_1)_m (\varphi_0, \varphi_0)_m - (\varphi_0, \varphi_1)_m^2}.
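A quick numerical check of these closed-form coefficients, with unit weights and the usual choice φ_0 = 1, φ_1 = x (the data values below are invented for illustration):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

phi0 = np.ones_like(x)              # phi_0(x) = 1
phi1 = x                            # phi_1(x) = x
dot = lambda f, g: np.sum(f * g)    # (f, g)_m with w_j = 1

den = dot(phi1, phi1)*dot(phi0, phi0) - dot(phi0, phi1)**2
a0 = (dot(y, phi0)*dot(phi1, phi1) - dot(y, phi1)*dot(phi1, phi0)) / den
a1 = (dot(y, phi1)*dot(phi0, phi0) - dot(y, phi0)*dot(phi1, phi0)) / den

# Cross-check against numpy's least-squares line fit (slope first).
a1_np, a0_np = np.polyfit(x, y, 1)
print(np.allclose([a0, a1], [a0_np, a1_np]))   # True
```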

Example 10.3 As already seen in Example 8.2, small changes in the data can give rise to large variations in the interpolating polynomial of a given function f. This does not happen for the least-squares polynomial when m is much larger than n. As an example, consider the function f(x) = \sin(2\pi x) in [-1, 1] and evaluate it at the 22 equally spaced nodes x_i = -1 + 2i/21, i = 0, \dots, 21, setting f_i = f(x_i). Then, suppose we add to the data f_i a random perturbation of the order of 10^{-3}, yielding perturbed data \tilde{f}_i, and denote by p_5 and \tilde{p}_5 the least-squares polynomials of degree 5 approximating the data f_i and \tilde{f}_i, respectively. The maximum norm of the difference p_5 - \tilde{p}_5 over [-1, 1] is of the order of 10^{-3}, i.e., it is of the same order as the perturbation on the data. For comparison, the same difference in the case of Lagrange interpolation is about equal to 2, as can be seen in Figure 10.2. •
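The least-squares half of this experiment is easy to reproduce; the random seed and the uniform perturbation model below are arbitrary choices, not taken from the text:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = -1 + 2*np.arange(22)/21                 # 22 equally spaced nodes in [-1, 1]
f = np.sin(2*np.pi*x)
f_pert = f + 1e-3*rng.uniform(-1, 1, 22)    # perturbation of order 10^-3

p5  = Polynomial.fit(x, f, 5)               # degree-5 least-squares fits
p5t = Polynomial.fit(x, f_pert, 5)

xx = np.linspace(-1, 1, 1001)
diff = np.max(np.abs(p5(xx) - p5t(xx)))
print(diff)   # of the order of 10^-3, like the data perturbation
```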

10.8 The Polynomial of Best Approximation

Consider a function f ∈ C⁰([a, b]). A polynomial p_n^* ∈ \mathbb{P}_n is said to be the polynomial of best approximation of f if it satisfies

\|f - p_n^*\|_\infty = \min_{p_n \in \mathbb{P}_n} \|f - p_n\|_\infty, \qquad (10.47)

where \|g\|_\infty = \max_{a \le x \le b} |g(x)|. This problem is referred to as a minimax approximation, as we are looking for the minimum error measured in the maximum norm.


FIGURE 10.2. The perturbed data (circles), the associated least-squares polynomial of degree 5 (solid line) and the Lagrange interpolating polynomial (dashed line)

Property 10.1 (Chebyshev equioscillation theorem) For any n ≥ 0, the polynomial of best approximation p_n^* of f exists and is unique. Moreover, in [a, b] there exist n + 2 points x_0 < x_1 < \dots < x_{n+1} such that

f(x_j) - p_n^*(x_j) = \sigma (-1)^j E_n^*(f), \quad j = 0, \dots, n + 1,

with \sigma = 1 or \sigma = -1 depending on f and n, and E_n^*(f) = \|f - p_n^*\|_\infty.

(For the proof, see [Dav63], Chapter 7.) As a consequence, there exist n + 1 points \tilde{x}_0 < \tilde{x}_1 < \dots < \tilde{x}_n, with x_k < \tilde{x}_k < x_{k+1} for k = 0, \dots, n, to be determined in [a, b] such that

p_n^*(\tilde{x}_j) = f(\tilde{x}_j), \quad j = 0, 1, \dots, n,

so that the best approximation polynomial is a polynomial of degree n that interpolates f at n + 1 unknown nodes.
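A classical illustration (not taken from the text): for f(x) = x² on [−1, 1], the best approximation in P₁ is the constant p_1^*(x) = 1/2, since the error x² − 1/2 = T₂(x)/2 equioscillates at the n + 2 = 3 points −1, 0, 1 with E_1^*(f) = 1/2:

```python
import numpy as np

xx = np.linspace(-1.0, 1.0, 100001)
err = xx**2 - 0.5                  # error of the candidate p*(x) = 1/2

E = np.max(np.abs(err))            # should equal E_1^*(f) = 1/2
vals = [err[0], err[len(xx)//2], err[-1]]   # error at x = -1, 0, 1
print(E, vals)                     # 0.5, with alternating signs +, -, +
```

Note the zero of the error at the interior Chebyshev-like points ±1/√2: these are the n + 1 unknown interpolation nodes \tilde{x}_j of the discussion above.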

—

The following result yields an estimate of En (f ) without explicitly com-

puting p— (we refer for the proof to [Atk89], Chapter 4).

n

Property 10.2 (de la Vallée-Poussin theorem) Let n ≥ 0 and let x_0 < x_1 < \dots < x_{n+1} be n + 2 points in [a, b]. If there exists a polynomial q_n of degree ≤ n such that

f(x_j) - q_n(x_j) = (-1)^j e_j, \quad j = 0, 1, \dots, n + 1,

where all the e_j have the same sign and are nonnull, then

\min_{0 \le j \le n+1} |e_j| \le E_n^*(f).

We can now relate E_n^*(f) with the interpolation error. Indeed,

\|f - \Pi_n f\|_\infty \le \|f - p_n^*\|_\infty + \|p_n^* - \Pi_n f\|_\infty.


On the other hand, using the Lagrange representation of p_n^* we get

\|p_n^* - \Pi_n f\|_\infty = \left\| \sum_{i=0}^{n} (p_n^*(x_i) - f(x_i)) l_i \right\|_\infty \le \|p_n^* - f\|_\infty \left\| \sum_{i=0}^{n} |l_i| \right\|_\infty,

from which it follows that

\|f - \Pi_n f\|_\infty \le (1 + \Lambda_n) E_n^*(f),

where \Lambda_n is the Lebesgue constant (8.11) associated with the nodes \{x_i\}.

Thanks to (10.25), we can conclude that the Lagrange interpolating polynomial on the Chebyshev nodes is a good approximation of p_n^*. The above results yield a characterization of the best approximation polynomial, but do not provide a constructive way of generating it. However, starting from the Chebyshev equioscillation theorem, it is possible to devise an algorithm, called the Remes algorithm, that is able to construct an arbitrarily good approximation of the polynomial p_n^* (see [Atk89], Section 4.7).

10.9 Fourier Trigonometric Polynomials

Let us apply the theory developed in the previous sections to a particular family of orthogonal polynomials which are no longer algebraic, but rather trigonometric. The Fourier polynomials on (0, 2\pi) are defined as

\varphi_k(x) = e^{ikx}, \quad k = 0, \pm 1, \pm 2, \dots,

where i is the imaginary unit. These are complex-valued periodic functions with period equal to 2\pi. We shall use the notation L^2(0, 2\pi) to denote the complex-valued functions that are square integrable over (0, 2\pi). Therefore

L^2(0, 2\pi) = \left\{ f : (0, 2\pi) \to \mathbb{C} \ \text{such that} \ \int_0^{2\pi} |f(x)|^2 \, dx < \infty \right\},

with scalar product and norm defined respectively by

(f, g) = \int_0^{2\pi} f(x) \overline{g(x)} \, dx, \qquad \|f\|_{L^2(0,2\pi)} = \sqrt{(f, f)}.

If f ∈ L^2(0, 2\pi), its Fourier series is

F f = \sum_{k=-\infty}^{\infty} f_k \varphi_k, \quad \text{with} \quad f_k = \frac{1}{2\pi} \int_0^{2\pi} f(x) e^{-ikx} \, dx = \frac{1}{2\pi} (f, \varphi_k). \qquad (10.48)

If f is complex-valued we set f(x) = \alpha(x) + i\beta(x) for x ∈ [0, 2\pi], where \alpha(x) is the real part of f and \beta(x) is the imaginary one. Recalling that e^{-ikx} = \cos(kx) - i\sin(kx) and letting

a_k = \frac{1}{2\pi} \int_0^{2\pi} [\alpha(x)\cos(kx) + \beta(x)\sin(kx)] \, dx,

b_k = \frac{1}{2\pi} \int_0^{2\pi} [-\alpha(x)\sin(kx) + \beta(x)\cos(kx)] \, dx,

the Fourier coefficients of the function f can be written as

f_k = a_k + i b_k, \quad \forall k = 0, \pm 1, \pm 2, \dots. \qquad (10.49)

We shall assume henceforth that f is a real-valued function; in such a case f_{-k} = \overline{f_k} for any k.

Let N be an even positive integer. Analogously to what was done in Section 10.1, we call the truncation of order N of the Fourier series the function

f_N^*(x) = \sum_{k=-N/2}^{N/2 - 1} f_k e^{ikx}.

The use of capital N instead of small n is to conform with the notation usually adopted in the analysis of discrete Fourier series (see [Bri74], [Wal91]). To simplify the notation, we also introduce an index shift so that

f_N^*(x) = \sum_{k=0}^{N-1} f_k e^{i(k - N/2)x},

where now

f_k = \frac{1}{2\pi} \int_0^{2\pi} f(x) e^{-i(k - N/2)x} \, dx = \frac{1}{2\pi} (f, \varphi_k), \quad k = 0, \dots, N - 1, \qquad (10.50)

and \varphi_k = e^{i(k - N/2)x}. Denoting by

S_N = \text{span}\{\varphi_k, \ 0 \le k \le N - 1\},

if f ∈ L^2(0, 2\pi), its truncation of order N satisfies the following optimal approximation property in the least-squares sense:

\|f - f_N^*\|_{L^2(0,2\pi)} = \min_{g \in S_N} \|f - g\|_{L^2(0,2\pi)}.

Set h = 2\pi/N and x_j = jh, for j = 0, \dots, N - 1, and introduce the following discrete scalar product

(f, g)_N = h \sum_{j=0}^{N-1} f(x_j) \overline{g(x_j)}. \qquad (10.51)


Replacing (f, \varphi_k) in (10.50) with (f, \varphi_k)_N, we get the discrete Fourier coefficients of the function f:

\tilde{f}_k = \frac{1}{N} \sum_{j=0}^{N-1} f(x_j) e^{-ikjh} e^{ij\pi} = \frac{1}{N} \sum_{j=0}^{N-1} f(x_j) W_N^{(k - N/2)j} \qquad (10.52)

for k = 0, \dots, N - 1, where

W_N = \exp\left(-i \frac{2\pi}{N}\right)

is the principal root of unity of order N. According to (10.4), the trigonometric polynomial

\Pi_N^F f(x) = \sum_{k=0}^{N-1} \tilde{f}_k e^{i(k - N/2)x} \qquad (10.53)

is called the discrete Fourier series of order N of f.
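Formula (10.52) is, up to the index shift, a discrete Fourier transform, so the coefficients can be computed with an FFT; the phase e^{ijπ} is simply the factor (−1)^j applied to the samples. A sketch (N = 8 and the sample function are arbitrary choices):

```python
import numpy as np

N = 8
h = 2*np.pi/N
j = np.arange(N)
x = j*h
f = np.exp(np.sin(x))          # any smooth 2*pi-periodic sample will do

# Direct evaluation of (10.52).
ftilde = np.array([np.sum(f*np.exp(-1j*k*j*h)*np.exp(1j*j*np.pi))/N
                   for k in range(N)])

# Same coefficients via the FFT of the sign-modulated samples.
ftilde_fft = np.fft.fft(f*(-1.0)**j)/N
print(np.allclose(ftilde, ftilde_fft))   # True
```

This works because numpy's FFT computes sums of the form \sum_j v_j e^{-ikjh}, matching (10.52) once v_j = f(x_j)(-1)^j.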

Lemma 10.1 The following property holds:

(\varphi_l, \varphi_j)_N = h \sum_{k=0}^{N-1} e^{-ik(l - j)h} = 2\pi \delta_{jl}, \quad 0 \le l, j \le N - 1, \qquad (10.54)

where \delta_{jl} is the Kronecker symbol.

Proof. For l = j the result is immediate. Thus, assume l ≠ j; we have that

\sum_{k=0}^{N-1} e^{-ik(l - j)h} = \frac{1 - e^{-i(l - j)hN}}{1 - e^{-i(l - j)h}} = 0.

Indeed, the numerator is 1 - (\cos(2\pi(l - j)) - i\sin(2\pi(l - j))) = 1 - 1 = 0, while the denominator cannot vanish. Actually, it vanishes iff (j - l)h = 2\pi, i.e., j - l = N, which is impossible. □
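Lemma 10.1 says that the φ_k form an orthogonal family for the discrete scalar product (10.51), with (φ_l, φ_j)_N = 2π δ_{jl}. This is easy to check numerically: the discrete Gram matrix of the φ_k is 2π times the identity (N = 8 is an arbitrary even choice):

```python
import numpy as np

N = 8
h = 2*np.pi/N
x = h*np.arange(N)
k = np.arange(N)

# phi_k(x) = exp(i (k - N/2) x), sampled at the N equispaced nodes x_j = j h.
Phi = np.exp(1j*np.outer(k - N/2, x))      # row l holds phi_l at the nodes

# Discrete scalar product (10.51): (phi_l, phi_j)_N = h * sum_m phi_l conj(phi_j).
G = h * (Phi @ Phi.conj().T)
print(np.allclose(G, 2*np.pi*np.eye(N)))   # True
```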

Thanks to Lemma 10.1, the trigonometric polynomial \Pi_N^F f is the Fourier interpolate of f at the nodes x_j, that is,

\Pi_N^F f(x_j) = f(x_j), \quad j = 0, 1, \dots, N - 1.

Indeed, using (10.52) and (10.54) in (10.53), it follows that

\Pi_N^F f(x_j) = \sum_{k=0}^{N-1} \tilde{f}_k e^{ikjh} e^{-ij\pi} = \frac{1}{N} \sum_{l=0}^{N-1} f(x_l) e^{i(l - j)\pi} \sum_{k=0}^{N-1} e^{-ik(l - j)h} = f(x_j).