
comprises only the vertices P_{0,n-1}(t) and P_{1,n-1}(t). It can be shown that

   P_{0,n}(t) = (1 - t) P_{0,n-1}(t) + t P_{1,n-1}(t) = B_n(P_0, P_1, ..., P_n, t),

that is, P_{0,n}(t) is equal to the value of the Bézier curve B_n at the point
corresponding to the fixed value of t. Repeating the process for several values
of the parameter t yields the construction of the curve in the considered
region of the plane.


FIGURE 8.14. Computation of the value of B_3 relative to the points (0,0), (4,7),
(14,7), (17,0) for t = 0.5, using the graphical method described in the text

Notice that, for a given node configuration, several curves can be constructed
according to the ordering of the points P_i. Moreover, the Bézier curve
B_n(P_0, P_1, ..., P_n, t) coincides with B_n(P_n, P_{n-1}, ..., P_0, t), apart from
the orientation.

Program 69 computes b_{n,k}(x) for x in [0, 1].

Program 69 - bernstein : Bernstein polynomials

function [bnk]=bernstein(n,k,x)
% BERNSTEIN evaluates the Bernstein polynomial b_{n,k} at the point x
if k == 0
  C = 1;
else
  C = prod([1:n])/(prod([1:k])*prod([1:n-k]));   % binomial coefficient (n over k)
end
bnk = C * x^k * (1-x)^(n-k);

Program 70 plots the Bézier curve relative to the set of points (x, y).

Program 70 - bezier : Bézier curves

function [bezx,bezy] = bezier(x,y,n)
% BEZIER evaluates the Bezier curve B_n relative to the n+1 control points (x,y)
i = 0; k = 0;
for t = 0:0.01:1
  i = i + 1; bnk = bernstein(n,k,t); ber(i) = bnk;
end
bezx = ber * x(1); bezy = ber * y(1);
for k = 1:n
  i = 0;
  for t = 0:0.01:1
    i = i + 1; bnk = bernstein(n,k,t); ber(i) = bnk;
  end
  bezx = bezx + ber * x(k+1); bezy = bezy + ber * y(k+1);
end
plot(bezx,bezy)
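
For instance, the curve of Figure 8.14 could be drawn by a call of the following
kind (a minimal sketch; the control points are those listed in the caption of
Figure 8.14):

x = [0 4 14 17]; y = [0 7 7 0];    % control points of Figure 8.14
[bezx, bezy] = bezier(x, y, 3);    % cubic Bezier curve, n = 3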


In practice, the Bézier curves are rarely used since they do not provide a
sufficiently accurate approximation of the characteristic polygon. For this
reason, the parametric B-splines were introduced in the 1970s; they are
used in (8.58) instead of the Bernstein polynomials. Parametric B-splines
are widely employed in packages for computer graphics since they enjoy
the following properties:

1. perturbing a single vertex of the characteristic polygon yields a local
perturbation of the curve only around the vertex itself;

2. the parametric B-spline approximates the control polygon better than
the corresponding Bézier curve does, and it is always contained within
the convex hull of the polygon.

In Figure 8.15 a comparison is made between Bézier curves and parametric
B-splines for the approximation of a given characteristic polygon.




FIGURE 8.15. Comparison of a Bézier curve (left) and a parametric B-spline
(right). The vertices of the characteristic polygon are denoted by —


We conclude this section by noticing that parametric cubic B-splines
allow for obtaining locally straight lines by aligning four consecutive vertices
(see Figure 8.16) and that a parametric B-spline can be constrained
at a specific point of the characteristic polygon by simply making three
consecutive points of the polygon coincide with the desired point.




FIGURE 8.16. Some parametric B-splines as functions of the number and positions
of the vertices of the characteristic polygon. Notice in the last figure (right)
the localization effects due to moving a single vertex




8.8 Applications
In this section we consider two problems arising from the solution of
fourth-order differential equations and from the reconstruction of images in
axial tomographies.

8.8.1 Finite Element Analysis of a Clamped Beam
Let us employ piecewise Hermite polynomials (see Section 8.4) for the
numerical approximation of the transversal bending of a clamped beam. This
problem was already considered in Section 4.7.2, where centered finite
differences were used.
The mathematical model is the fourth-order boundary value problem
(4.74), here presented in the following general formulation

   (α(x)u''(x))'' = f(x),   0 < x < L,
                                                              (8.59)
   u(0) = u(L) = 0,   u'(0) = u'(L) = 0.

In the particular case of (4.74) we have α = EJ and f = P; we assume
henceforth that α is a positive and bounded function over (0, L) and that
f ∈ L²(0, L).
We multiply (8.59) by a sufficiently smooth arbitrary function v; then,
we integrate by parts twice, to obtain

   ∫_0^L α u'' v'' dx − [(αu'')' v]_0^L + [αu'' v']_0^L = ∫_0^L f v dx.

Problem (8.59) is then replaced by the following problem in integral form

   find u ∈ V such that   ∫_0^L α u'' v'' dx = ∫_0^L f v dx,   ∀v ∈ V,        (8.60)

where

   V = { v : v^(k) ∈ L²(0, L), k = 0, 1, 2,  v^(k)(0) = v^(k)(L) = 0, k = 0, 1 }.

Problem (8.60) admits a unique solution, which represents the deformed
configuration that minimizes the total potential energy of the beam over
the space V (see, for instance, [Red86], p. 156)

   J(u) = ∫_0^L [ (1/2) α (u'')² − f u ] dx.

In view of the numerical solution of problem (8.60), we introduce a partition
T_h of [0, L] into K subintervals T_k = [x_{k−1}, x_k] (k = 1, ..., K) of uniform
length h = L/K, with x_k = kh, and the finite dimensional space

   V_h = { v_h ∈ C¹([0, L]) :  v_h|_T ∈ P_3(T) ∀T ∈ T_h,
                                                              (8.61)
           v_h^(k)(0) = v_h^(k)(L) = 0, k = 0, 1 }.

Let us equip V_h with a basis. For this purpose, we associate with each
internal node x_i (i = 1, ..., K−1) a support σ_i = T_i ∪ T_{i+1} and two
functions φ_i, ψ_i defined as follows: for any k, φ_i|_{T_k} ∈ P_3(T_k), ψ_i|_{T_k} ∈ P_3(T_k),
and for any j = 0, ..., K,

   φ_i(x_j) = δ_ij,   φ_i'(x_j) = 0,
                                                              (8.62)
   ψ_i(x_j) = 0,      ψ_i'(x_j) = δ_ij.

Notice that the above functions belong to V_h and define a basis

   B_h = { φ_i, ψ_i,  i = 1, ..., K−1 }.                      (8.63)

These basis functions can be brought back to the reference interval T̂ = [0, 1],
0 ≤ x̂ ≤ 1, by the affine maps x = hx̂ + x_{k−1} between T̂ and T_k,
for k = 1, ..., K.
Therefore, let us introduce on the interval T̂ the basis functions φ̂_0^(0)
and φ̂_0^(1), associated with the node x̂ = 0, and φ̂_1^(0) and φ̂_1^(1), associated
with the node x̂ = 1. Each of these is of the form φ̂ = a_0 + a_1 x̂ + a_2 x̂² +
a_3 x̂³; in particular, the functions with superscript "0" must satisfy the
first two conditions in (8.62), while those with superscript "1" must fulfill
the remaining two conditions. Solving the associated 4×4 system, we get

   φ̂_0^(0)(x̂) = 1 − 3x̂² + 2x̂³,    φ̂_0^(1)(x̂) = x̂ − 2x̂² + x̂³,
                                                              (8.64)
   φ̂_1^(0)(x̂) = 3x̂² − 2x̂³,        φ̂_1^(1)(x̂) = −x̂² + x̂³.

The graphs of the functions (8.64) are drawn in Figure 8.17 (left), where
(0), (1), (2) and (3) denote φ̂_0^(0), φ̂_1^(0), φ̂_0^(1) and φ̂_1^(1), respectively.
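
As a quick check of (8.62), the following lines (a minimal sketch, with hypothetical
variable names) verify that the cubics (8.64) and their first derivatives take the
prescribed values at x̂ = 0 and x̂ = 1:

phi00 = @(x) 1 - 3*x.^2 + 2*x.^3;  dphi00 = @(x) -6*x + 6*x.^2;
phi01 = @(x) x - 2*x.^2 + x.^3;    dphi01 = @(x) 1 - 4*x + 3*x.^2;
phi10 = @(x) 3*x.^2 - 2*x.^3;      dphi10 = @(x) 6*x - 6*x.^2;
phi11 = @(x) -x.^2 + x.^3;         dphi11 = @(x) -2*x + 3*x.^2;
% values and slopes at the end points [0 1]
[phi00([0 1]); dphi00([0 1])]      % expected [1 0; 0 0]
[phi11([0 1]); dphi11([0 1])]      % expected [0 0; 0 1]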
The function u_h ∈ V_h can be written as

   u_h(x) = Σ_{i=1}^{K−1} u_i φ_i(x) + Σ_{i=1}^{K−1} u_i^(1) ψ_i(x).        (8.65)

The coefficients and the degrees of freedom of u_h have the following meaning:
u_i = u_h(x_i), u_i^(1) = u_h'(x_i) for i = 1, ..., K−1. Notice that (8.65)
is a special instance of (8.32), having set m_i = 1.
The discretization of problem (8.60) reads

   find u_h ∈ V_h such that   ∫_0^L α u_h'' v_h'' dx = ∫_0^L f v_h dx,   ∀v_h ∈ B_h.      (8.66)

This is called the Galerkin finite element approximation of the differential
problem (8.59). We refer to Chapter 12, Sections 12.4 and 12.4.5, for a
more comprehensive discussion and analysis of the method.

FIGURE 8.17. Canonical Hermite basis on the reference interval 0 ≤ x̂ ≤ 1 (left);
convergence histories of the conjugate gradient method in the solution of system
(8.69) (right). On the x-axis the number of iterations k is shown, while the y-axis
represents the quantity ‖r^(k)‖_2 / ‖b_1‖_2, where r^(k) is the residual of system (8.69)

Using the representation (8.65) we end up with the following system in
the 2K − 2 unknowns u_1, u_2, ..., u_{K−1}, u_1^(1), u_2^(1), ..., u_{K−1}^(1):

   Σ_{j=1}^{K−1} [ u_j ∫_0^L α φ_j'' φ_i'' dx + u_j^(1) ∫_0^L α ψ_j'' φ_i'' dx ] = ∫_0^L f φ_i dx,
                                                                                            (8.67)
   Σ_{j=1}^{K−1} [ u_j ∫_0^L α φ_j'' ψ_i'' dx + u_j^(1) ∫_0^L α ψ_j'' ψ_i'' dx ] = ∫_0^L f ψ_i dx,

for i = 1, ..., K−1. Assuming, for the sake of simplicity, that the beam has
unit length L, that α and f are two constants, and computing the integrals
in (8.67), the final system reads in matrix form

   Au + Bp = b_1,
                                     (8.68)
   B^T u + Cp = 0,

where the vectors u, p ∈ R^{K−1} contain the nodal unknowns u_i and u_i^(1),
b_1 ∈ R^{K−1} is the vector of components equal to h⁴ f/α, while

   A = tridiag_{K−1}(−12, 24, −12),   B = tridiag_{K−1}(−6, 0, 6),   C = tridiag_{K−1}(2, 8, 2).

System (8.68) has size equal to 2(K − 1); eliminating the unknown p from
the second equation, we get the reduced system (of size K − 1)

   (A − BC^{−1}B^T) u = b_1.                                   (8.69)

Since B is skew-symmetric and A is symmetric and positive definite (s.p.d.),
the matrix M = A − BC^{−1}B^T is s.p.d. too. Using the Cholesky factorization for
solving system (8.69) is impractical since C^{−1} is full. An alternative is thus the
conjugate gradient method (CG), supplied with a suitable preconditioner,
since the spectral condition number of M is of the order of h^{−4} = K⁴.
We notice that computing the residual at each step k ≥ 0 requires solving
a linear system whose right-hand side is the vector B^T u^(k), u^(k) being the
current iterate of the CG method, and whose coefficient matrix is the matrix C.
This system can be solved using the Thomas algorithm (3.53) at a cost
of the order of K flops.
The CG algorithm terminates at the lowest value of k
for which ‖r^(k)‖_2 ≤ u ‖b_1‖_2, where r^(k) is the residual of system (8.69) and
u is the roundoff unit.
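
As a rough illustration (not the program used in the book), the reduced system
(8.69) could be assembled and solved in MATLAB along the following lines, for
α = f = 1 and L = 1; the tridiagonal solve C\(B'*v) plays the role of the Thomas
algorithm mentioned above:

K = 50; h = 1/K; e = ones(K-1,1);
A  = spdiags([-12*e 24*e -12*e], -1:1, K-1, K-1);
B  = spdiags([-6*e zeros(K-1,1) 6*e], -1:1, K-1, K-1);
C  = spdiags([2*e 8*e 2*e], -1:1, K-1, K-1);
b1 = h^4*ones(K-1,1);                  % components h^4*f/alpha, with alpha = f = 1
Mv = @(v) A*v - B*(C\(B'*v));          % product by M = A - B*inv(C)*B'
u  = pcg(Mv, b1, 1.e-12, 5000);        % unpreconditioned CG on (8.69)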
The results obtained running the CG method in the case of a uniform
partition of [0, 1] with K = 50 elements and setting α = f = 1 are summarized
in Figure 8.17 (right), which shows the convergence histories of
the method in both nonpreconditioned form (denoted by "No Prec.") and
with the SSOR preconditioner (denoted by "Prec."), having set the relaxation
parameter ω = 1.95.
We notice that the CG method does not converge within K − 1 steps
due to the effect of the rounding errors. Notice also the effectiveness of the
SSOR preconditioner in terms of the reduction of the number of iterations.
However, the high computational cost of this preconditioner prompts us to
devise another choice. Looking at the structure of the matrix M, a natural
preconditioner is M̃ = A − BC̃^{−1}B^T, where C̃ is the diagonal matrix whose
entries are c̃_ii = Σ_{j=1}^{K−1} |c_ij|. The matrix M̃ is banded, so that its inversion
requires a much lower cost than the SSOR preconditioner. Moreover,
as shown in Table 8.6, using M̃ provides a dramatic decrease of the
number of iterations needed to converge.

    K    Without Precond.    SSOR    M̃
   25          51              27    12
   50         178              61    25
  100         685             118    33
  200        2849             237    34

TABLE 8.6. Number of iterations as a function of K




8.8.2 Geometric Reconstruction Based on Computer
Tomographies
A typical application of the algorithms presented in Section 8.7 deals with
the reconstruction of the three-dimensional structure of internal organs of
the human body based on computer tomographies (CT).




FIGURE 8.18. Cross-section of a blood vessel (left) and an associated
characteristic polygon using 16 points P_i (right)




The CT usually provides a sequence of images which represent the sections
of an organ at several horizontal planes; as a convention, we say that the
CT produces sections of the x, y plane in correspondence of several values of
z. The result is analogous to what we would get by sectioning the organ at
different values of z and taking a picture of the corresponding sections.
Obviously, the great advantage in using the CT is that the organ under
investigation can be visualized without being hidden by the neighboring
ones, as happens in other kinds of medical images, e.g., angiographies.
The image that is obtained for each section is coded into a matrix of
pixels (abbreviation of picture elements) in the x, y plane; a certain value
is associated with each pixel expressing the level of grey of the image at
that point. This level is determined by the density of X-rays which are
collected by a detector after passing through the human body. In practice,
the information contained in a CT at a given value of z is expressed by a
set of points (x_i, y_i) which identify the boundary of the organ at z.
To improve the diagnostics it is often useful to reconstruct the three-dimensional
structure of the organ under examination starting from the
sections provided by the CT. With this aim, it is necessary to convert the
information coded by pixels into a parametric representation which can be
expressed by suitable functions interpolating the image at some significant
points on its boundary. This reconstruction can be carried out by using the
methods described in Section 8.7, as shown in Figure 8.19.
A set of curves like those shown in Figure 8.19 can be suitably stacked to
provide an overall three-dimensional view of the organ under examination.




FIGURE 8.19. Reconstruction of the internal vessel of Figure 8.18 using different
interpolating splines with the same characteristic polygon: (a) Bézier curves, (b)
parametric splines and (c) parametric B-splines


8.9 Exercises
1. Prove that the characteristic polynomials l_i ∈ P_n defined in (8.3) form a
basis for P_n.
2. An alternative approach to the method in Theorem 8.1, for constructing
the interpolating polynomial, consists of directly enforcing the n + 1 interpolation
constraints on Π_n and then computing the coefficients a_i. By
doing so, we end up with a linear system Xa = y, with a = (a_0, ..., a_n)^T,
y = (y_0, ..., y_n)^T and X = [x_i^j]. X is called the Vandermonde matrix. Prove
that X is nonsingular if the nodes x_i are distinct.

[Hint: show that det(X) = Π_{0≤j<i≤n} (x_i − x_j) by recursion on n.]

3. Prove that ω'_{n+1}(x_i) = Π_{j=0, j≠i}^n (x_i − x_j), where ω_{n+1} is the nodal
polynomial (8.6). Then, check (8.5).
4. Provide an estimate of ‖ω_{n+1}‖_∞, in the cases n = 1 and n = 2, for a
distribution of equally spaced nodes.
5. Prove that

   (n − 1)! h^{n−1} |(x − x_{n−1})(x − x_n)| ≤ |ω_{n+1}(x)| ≤ n! h^{n−1} |(x − x_{n−1})(x − x_n)|,

   where n is even, −1 = x_0 < x_1 < ... < x_{n−1} < x_n = 1, x ∈ (x_{n−1}, x_n) and
   h = 2/n.

   [Hint: let N = n/2 and show first that

   ω_{n+1}(x) = (x + Nh)(x + (N−1)h) ... (x + h) x (x − h) ... (x − (N−1)h)(x − Nh).      (8.70)

   Then, take x = rh with N − 1 < r < N.]
6. Under the assumptions of Exercise 5, show that |ω_{n+1}| is maximum if
x ∈ (x_{n−1}, x_n) (notice that |ω_{n+1}| is an even function).
[Hint: use (8.70) to prove that |ω_{n+1}(x + h)/ω_{n+1}(x)| > 1 for any x ∈
(0, x_{n−1}) with x not coinciding with any interpolation node.]
7. Prove the recursive relation (8.19) for Newton divided differences.
8. Determine an interpolating polynomial Hf ∈ P_n such that

   (Hf)^(k)(x_0) = f^(k)(x_0),   k = 0, ..., n,

   and check that

   Hf(x) = Σ_{j=0}^n (f^(j)(x_0)/j!) (x − x_0)^j,

   that is, the Hermite interpolating polynomial on one node coincides with
   the Taylor polynomial.
9. Given the following set of data

   f_0 = f(−1) = 1,   f_1 = f'(−1) = 1,   f_2 = f'(1) = 2,   f_3 = f(2) = 1,

   prove that the Hermite-Birkhoff interpolating polynomial H_3 does not exist
   for them.
   [Solution: letting H_3(x) = a_3x³ + a_2x² + a_1x + a_0, one must check that
   the matrix of the linear system enforcing the four conditions above is singular.]
10. Check that any s_k ∈ S_k[a, b] admits a representation of the form

   s_k(x) = Σ_{i=0}^k b_i x^i + Σ_{i=1}^g c_i (x − x_i)_+^k,

   that is, 1, x, x², ..., x^k, (x − x_1)_+^k, ..., (x − x_g)_+^k form a basis for S_k[a, b].

11. Prove Property 8.2 and check its validity even in the case where the spline
s satisfies conditions of the form s'(a) = f'(a), s'(b) = f'(b).
[Hint: start from

   ∫_a^b (f''(x) − s''(x)) s''(x) dx = Σ_{i=1}^n ∫_{x_{i−1}}^{x_i} (f''(x) − s''(x)) s'' dx

and integrate by parts twice.]

12. Let f(x) = cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ...; then, consider the following
rational approximation

   r(x) = (a_0 + a_2x² + a_4x⁴)/(1 + b_2x²),                  (8.71)

called the Padé approximation. Determine the coefficients of r in such a
way that

   f(x) − r(x) = γ_8 x⁸ + γ_10 x^10 + ...

[Solution: a_0 = 1, a_2 = −7/15, a_4 = 1/40, b_2 = 1/30.]
13. Assume that the function f of the previous exercise is known at a set of n + 1
equally spaced points x_i ∈ (−π/2, π/2), with i = 0, ..., n. Repeat Exercise
12, determining, by using MATLAB, the coefficients of r in such a way
that the quantity Σ_{i=0}^n |f(x_i) − r(x_i)|² is minimized. Consider the cases
n = 5 and n = 10.
9
Numerical Integration




In this chapter we present the most commonly used methods for numerical
integration. We will mainly consider one-dimensional integrals over
bounded intervals, although in Sections 9.8 and 9.9 an extension of the
techniques to integration over unbounded intervals (or integration of functions
with singularities) and to the multidimensional case will be considered.



9.1 Quadrature Formulae
Let f be a real integrable function over the interval [a, b]. Computing
explicitly the definite integral I(f) = ∫_a^b f(x)dx may be difficult or even
impossible. Any explicit formula that is suitable for providing an approximation
of I(f) is said to be a quadrature formula or numerical integration
formula.
An example can be obtained by replacing f with an approximation f_n,
depending on the integer n ≥ 0, then computing I(f_n) instead of I(f).
Letting I_n(f) = I(f_n), we have

   I_n(f) = ∫_a^b f_n(x)dx,   n ≥ 0.                          (9.1)

The dependence on the end points a, b is always understood, so we write
I_n(f) instead of I_n(f; a, b).

If f ∈ C⁰([a, b]), the quadrature error E_n(f) = I(f) − I_n(f) satisfies

   |E_n(f)| ≤ ∫_a^b |f(x) − f_n(x)| dx ≤ (b − a) ‖f − f_n‖_∞.

Therefore, if for some n, ‖f − f_n‖_∞ < ε, then |E_n(f)| ≤ ε(b − a).
The approximant f_n must be easily integrable, which is the case if, for
example, f_n ∈ P_n. In this respect, a natural approach consists of using
f_n = Π_n f, the interpolating Lagrange polynomial of f over a set of n + 1
distinct nodes {x_i}, with i = 0, ..., n. By doing so, from (9.1) it follows
that

   I_n(f) = Σ_{i=0}^n f(x_i) ∫_a^b l_i(x) dx,                 (9.2)

where l_i is the characteristic Lagrange polynomial of degree n associated
with node x_i (see Section 8.1). We notice that (9.2) is a special instance of
the following quadrature formula

   I_n(f) = Σ_{i=0}^n α_i f(x_i),                             (9.3)

where the coefficients α_i of the linear combination are given by ∫_a^b l_i(x)dx.
Formula (9.3) is a weighted sum of the values of f at the points x_i, for
i = 0, ..., n. These points are said to be the nodes of the quadrature
formula, while the numbers α_i ∈ R are its coefficients or weights. Both
weights and nodes depend in general on n; again, for notational simplicity,
this dependence is always understood.
Formula (9.2), called the Lagrange quadrature formula, can be generalized
to the case where also the values of the derivative of f are available. This
leads to the Hermite quadrature formula (see Section 9.5)

   I_n(f) = Σ_{k=0}^1 Σ_{i=0}^n α_{ik} f^(k)(x_i),            (9.4)

where the weights are now denoted by α_{ik}.
Both (9.2) and (9.4) are interpolatory quadrature formulae, since the
function f has been replaced by its interpolating polynomial (Lagrange
and Hermite polynomials, respectively). We define the degree of exactness
of a quadrature formula as the maximum integer r ≥ 0 for which

   I_n(f) = I(f),   ∀f ∈ P_r.

Any interpolatory quadrature formula that makes use of n + 1 distinct
nodes has degree of exactness equal to at least n. Indeed, if f ∈ P_n, then

Π_n f = f and thus I_n(Π_n f) = I(Π_n f). The converse statement is also true,
that is, a quadrature formula using n + 1 distinct nodes and having degree
of exactness at least equal to n is necessarily of interpolatory type (for the
proof see [IK66], p. 316).
As we will see in Section 10.2, the degree of exactness of a Lagrange
quadrature formula can be as large as 2n + 1 in the case of the so-called
Gaussian quadrature formulae.


9.2 Interpolatory Quadratures
We consider three remarkable instances of formula (9.2), corresponding to
n = 0, 1 and 2.


9.2.1 The Midpoint or Rectangle Formula
This formula is obtained by replacing f over [a, b] with the constant function
equal to the value attained by f at the midpoint of [a, b] (see Figure 9.1,
left). This yields

   I_0(f) = (b − a) f((a + b)/2)                              (9.5)

with weight α_0 = b − a and node x_0 = (a + b)/2. If f ∈ C²([a, b]), the
quadrature error is

   E_0(f) = (h³/3) f''(ξ),   h = (b − a)/2,                   (9.6)

where ξ lies within the interval (a, b).
FIGURE 9.1. The midpoint formula (left); the composite midpoint formula
(right)

Indeed, expanding f in a Taylor series around c = (a + b)/2 and truncating
at the second order, we get

   f(x) = f(c) + f'(c)(x − c) + f''(η(x))(x − c)²/2,

from which, integrating on (a, b) and using the mean-value theorem, (9.6)
follows. From this, it turns out that (9.5) is exact for constant and affine
functions (since in both cases f''(ξ) = 0 for any ξ ∈ (a, b)), so that the
midpoint rule has degree of exactness equal to 1.
It is worth noting that if the width of the integration interval [a, b] is
not sufficiently small, the quadrature error (9.6) can be quite large. This
drawback is common to all the numerical integration formulae that will
be described in the three forthcoming sections and can be overcome by
resorting to their composite counterparts as discussed in Section 9.4.
Suppose now that we approximate the integral I(f) by replacing f over
[a, b] with its composite interpolating polynomial of degree zero, constructed
on m subintervals of width H = (b − a)/m, for m ≥ 1 (see Figure 9.1,
right). Introducing the quadrature nodes x_k = a + (2k + 1)H/2, for k =
0, ..., m − 1, we get the composite midpoint formula

   I_{0,m}(f) = H Σ_{k=0}^{m−1} f(x_k),   m ≥ 1.              (9.7)

The quadrature error E_{0,m}(f) = I(f) − I_{0,m}(f) is given by

   E_{0,m}(f) = ((b − a)/24) H² f''(ξ),   H = (b − a)/m,      (9.8)

provided that f ∈ C²([a, b]) and where ξ ∈ (a, b). From (9.8) we conclude
that (9.7) has degree of exactness equal to 1; (9.8) can be proved by recalling
(9.6) and using the additivity of integrals. Indeed, for k = 0, ..., m − 1 and
ξ_k ∈ (a + kH, a + (k + 1)H),

   E_{0,m}(f) = Σ_{k=0}^{m−1} f''(ξ_k)(H/2)³/3 = (H²/24) ((b − a)/m) Σ_{k=0}^{m−1} f''(ξ_k) = ((b − a)/24) H² f''(ξ).

The last equality is a consequence of the following theorem, that is applied
letting u = f'' and δ_j = 1 for j = 0, ..., m − 1.

Theorem 9.1 (discrete mean-value theorem) Let u ∈ C⁰([a, b]), let
x_j be s + 1 points in [a, b] and let δ_j be s + 1 constants, all having the same
sign. Then there exists η ∈ [a, b] such that

   Σ_{j=0}^s δ_j u(x_j) = u(η) Σ_{j=0}^s δ_j.                 (9.9)

Proof. Let u_m = min_{x∈[a,b]} u(x) = u(x̄) and u_M = max_{x∈[a,b]} u(x) = u(x̃),
where x̄ and x̃ are two points in (a, b). Then, assuming the δ_j nonnegative,

   u_m Σ_{j=0}^s δ_j ≤ Σ_{j=0}^s δ_j u(x_j) ≤ u_M Σ_{j=0}^s δ_j.        (9.10)

Let σ_s = Σ_{j=0}^s δ_j u(x_j) and consider the continuous function U(x) = u(x) Σ_{j=0}^s δ_j.
Thanks to (9.10), U(x̄) ≤ σ_s ≤ U(x̃). Applying the mean-value theorem, there
exists a point η between a and b such that U(η) = σ_s, which is (9.9). A similar
proof can be carried out if the coefficients δ_j are negative. □

The composite midpoint formula is implemented in Program 71. Throughout
this chapter, we shall denote by a and b the end points of the integration
interval and by m the number of quadrature subintervals. The variable fun
contains the expression of the function f, while the output variable int
contains the value of the approximate integral.

Program 71 - midpntc : Midpoint composite formula

function int = midpntc(a,b,m,fun)
h=(b-a)/m; x=[a+h/2:h:b]; dim = max(size(x)); y=eval(fun);
if size(y)==1, y=diag(ones(dim))*y; end; int=h*sum(y);




9.2.2 The Trapezoidal Formula
This formula is obtained by replacing f with Π_1 f, its Lagrange interpolating
polynomial of degree 1, relative to the nodes x_0 = a and x_1 = b (see
Figure 9.2, left). The resulting quadrature, having nodes x_0 = a, x_1 = b
and weights α_0 = α_1 = (b − a)/2, is

   I_1(f) = ((b − a)/2) [f(a) + f(b)].                        (9.11)

If f ∈ C²([a, b]), the quadrature error is given by

   E_1(f) = −(h³/12) f''(ξ),   h = b − a,                     (9.12)

where ξ is a point within the integration interval.

FIGURE 9.2. Trapezoidal formula (left) and Cavalieri-Simpson formula (right)

Indeed, from the expression of the interpolation error (8.7) one gets

   E_1(f) = ∫_a^b (f(x) − Π_1 f(x)) dx = −(1/2) ∫_a^b f''(ξ(x))(x − a)(b − x) dx.

Since ω_2(x) = (x − a)(x − b) < 0 in (a, b), the mean-value theorem yields

   E_1(f) = (1/2) f''(ξ) ∫_a^b ω_2(x) dx = −f''(ξ)(b − a)³/12,

for some ξ ∈ (a, b), which is (9.12). The trapezoidal quadrature therefore
has degree of exactness equal to 1, as is the case with the midpoint rule.
To obtain the composite trapezoidal formula, we proceed as in the case
where n = 0, by replacing f over [a, b] with its composite Lagrange polynomial
of degree 1 on m subintervals, with m ≥ 1. Introduce the quadrature
nodes x_k = a + kH, for k = 0, ..., m and H = (b − a)/m, getting

   I_{1,m}(f) = (H/2) Σ_{k=0}^{m−1} (f(x_k) + f(x_{k+1})),   m ≥ 1.       (9.13)

Each nodal value f(x_k) in (9.13) is counted twice, except the first and the last
one, so that the formula can be written as

   I_{1,m}(f) = H [ (1/2) f(x_0) + f(x_1) + ... + f(x_{m−1}) + (1/2) f(x_m) ].    (9.14)

As was done for (9.8), it can be shown that the quadrature error associated
with (9.14) is

   E_{1,m}(f) = −((b − a)/12) H² f''(ξ),

provided that f ∈ C²([a, b]), where ξ ∈ (a, b). The degree of exactness is
again equal to 1.

The composite trapezoidal rule is implemented in Program 72.

Program 72 - trapezc : Composite trapezoidal formula

function int = trapezc(a,b,m,fun)
h=(b-a)/m; x=[a:h:b]; dim = max(size(x)); y=eval(fun);
if size(y)==1, y=diag(ones(dim))*y; end;
int=h*(0.5*y(1)+sum(y(2:m))+0.5*y(m+1));

9.2.3 The Cavalieri-Simpson Formula
The Cavalieri-Simpson formula can be obtained by replacing f over [a, b]
with its interpolating polynomial of degree 2 at the nodes x_0 = a, x_1 =
(a + b)/2 and x_2 = b (see Figure 9.2, right). The weights are given by
α_0 = α_2 = (b − a)/6 and α_1 = 4(b − a)/6, and the resulting formula reads

   I_2(f) = ((b − a)/6) [ f(a) + 4 f((a + b)/2) + f(b) ].     (9.15)

It can be shown that the quadrature error is

   E_2(f) = −(h⁵/90) f^(4)(ξ),   h = (b − a)/2,               (9.16)

provided that f ∈ C⁴([a, b]), and where ξ lies within (a, b). From (9.16) it
turns out that (9.15) has degree of exactness equal to 3.
Replacing f with its composite polynomial of degree 2 over [a, b] yields
the composite formula corresponding to (9.15). Introducing the quadrature
nodes x_k = a + kH/2, for k = 0, ..., 2m, and letting H = (b − a)/m, with
m ≥ 1, gives

   I_{2,m} = (H/6) [ f(x_0) + 2 Σ_{r=1}^{m−1} f(x_{2r}) + 4 Σ_{s=0}^{m−1} f(x_{2s+1}) + f(x_{2m}) ].    (9.17)

The quadrature error associated with (9.17) is

   E_{2,m}(f) = −((b − a)/180) (H/2)⁴ f^(4)(ξ),

provided that f ∈ C⁴([a, b]) and where ξ ∈ (a, b); the degree of exactness
of the formula is 3.
The composite Cavalieri-Simpson quadrature is implemented in Program
73.

Program 73 - simpsonc : Composite Cavalieri-Simpson formula
function int = simpsonc(a,b,m,fun)
h=(b-a)/m; x=[a:h/2:b]; dim = max(size(x)); y=eval(fun);
if size(y)==1, y=diag(ones(dim))*y; end;
int=(h/6)*(y(1)+2*sum(y(3:2:2*m-1))+4*sum(y(2:2:2*m))+y(2*m+1));


Example 9.1 Let us employ the midpoint, trapezoidal and Cavalieri-Simpson
composite formulae to compute the integral

   ∫_0^{2π} x e^{−x} cos(2x) dx = [3(e^{−2π} − 1) − 10π e^{−2π}]/25 ≃ −0.122122.      (9.18)

Table 9.1 shows in even columns the behavior of the absolute value of the error
when halving H (thus, doubling m), while in odd columns the ratio R_m =
|E_m|/|E_{2m}| between two consecutive errors is given. As predicted by the previous
theoretical analysis, R_m tends to 4 for the midpoint and trapezoidal rules and
to 16 for the Cavalieri-Simpson formula.

    m    |E_{0,m}|      R_m      |E_{1,m}|      R_m      |E_{2,m}|      R_m
    1    0.9751          -       1.589e-01       -       7.030e-01       -
    2    1.037          0.9406   0.5670         0.2804   0.5021         1.400
    4    0.1221         8.489    0.2348         2.415    3.139·10^-3    159.96
    8    2.980·10^-2    4.097    5.635·10^-2    4.167    1.085·10^-3    2.892
   16    6.748·10^-3    4.417    1.327·10^-2    4.245    7.381·10^-5    14.704
   32    1.639·10^-3    4.118    3.263·10^-3    4.068    4.682·10^-6    15.765
   64    4.066·10^-4    4.030    8.123·10^-4    4.017    2.936·10^-7    15.946
  128    1.014·10^-4    4.008    2.028·10^-4    4.004    1.836·10^-8    15.987
  256    2.535·10^-5    4.002    5.070·10^-5    4.001    1.148·10^-9    15.997

TABLE 9.1. Absolute error for midpoint, trapezoidal and Cavalieri-Simpson composite
formulae in the approximate evaluation of integral (9.18)
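
A few rows of Table 9.1 could be reproduced, for instance, with a short driver of
the following kind (a minimal sketch calling Programs 71-73; the string fun is a
vectorized expression of x):

a = 0; b = 2*pi; fun = 'x.*exp(-x).*cos(2*x)';
Iex = (3*(exp(-2*pi)-1) - 10*pi*exp(-2*pi))/25;      % exact value of (9.18)
for m = [1 2 4 8 16 32 64 128 256]
  E0 = abs(Iex - midpntc(a,b,m,fun));
  E1 = abs(Iex - trapezc(a,b,m,fun));
  E2 = abs(Iex - simpsonc(a,b,m,fun));
  fprintf('%4d  %10.3e  %10.3e  %10.3e\n', m, E0, E1, E2);
end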




9.3 Newton-Cotes Formulae
These formulae are based on Lagrange interpolation with equally spaced
nodes in [a, b]. For a fixed n ≥ 0, let us denote the quadrature nodes
by x_k = x_0 + kh, k = 0, ..., n. The midpoint, trapezoidal and Simpson
formulae are special instances of the Newton-Cotes formulae, taking n = 0,
n = 1 and n = 2 respectively. In the general case, we define:

- closed formulae, those where x_0 = a, x_n = b and h = (b − a)/n  (n ≥ 1);

- open formulae, those where x_0 = a + h, x_n = b − h and h = (b − a)/(n + 2)  (n ≥ 0).

A significant property of the Newton-Cotes formulae is that the quadrature
weights α_i depend explicitly only on n and h, but not on the integration
interval [a, b]. To check this property in the case of closed formulae, let us
introduce the change of variable x = Ψ(t) = x_0 + th. Noting that Ψ(0) = a,
Ψ(n) = b and x_k = a + kh, we get

   (x − x_k)/(x_i − x_k) = (a + th − (a + kh))/(a + ih − (a + kh)) = (t − k)/(i − k).

Therefore, if n ≥ 1,

   l_i(x) = Π_{k=0, k≠i}^n (t − k)/(i − k) = φ_i(t),   0 ≤ i ≤ n.

The following expression for the quadrature weights is obtained

   α_i = ∫_a^b l_i(x) dx = ∫_0^n φ_i(t) h dt = h ∫_0^n φ_i(t) dt,

from which we get the formula

   I_n(f) = h Σ_{i=0}^n w_i f(x_i),   w_i = ∫_0^n φ_i(t) dt.

Open formulae can be interpreted in a similar manner. Actually, using again
the mapping x = Ψ(t), we get x_0 = a + h, x_n = b − h and x_k = a + h(k + 1)
for k = 1, ..., n − 1. Letting, for the sake of coherence, x_{−1} = a, x_{n+1} = b and
proceeding as in the case of closed formulae, we get α_i = h ∫_{−1}^{n+1} φ_i(t) dt,
and thus

   I_n(f) = h Σ_{i=0}^n w_i f(x_i),   w_i = ∫_{−1}^{n+1} φ_i(t) dt.

In the special case where n = 0, since l_0(x) = φ_0(t) = 1, we get w_0 = 2.
The coefficients w_i do not depend on a, b, h and f, but only on n,
and can therefore be tabulated a priori. In the case of closed formulae, the
polynomials φ_i and φ_{n−i}, for i = 0, ..., n − 1, have by symmetry the same
integral, so that also the corresponding weights w_i and w_{n−i} are equal for
i = 0, ..., n − 1. In the case of open formulae, the weights w_i and w_{n−i} are
equal for i = 0, ..., n. For this reason, we show in Table 9.2 only the first
half of the weights.
Notice the presence of negative weights in open formulae for n ≥ 2. This can
be a source of numerical instability, in particular due to rounding errors.

  Closed formulae:
   n      1      2      3       4        5         6
   w0    1/2    1/3    3/8    14/45    95/288    41/140
   w1     0     4/3    9/8    64/45   375/288   216/140
   w2     0      0      0     24/45   250/288    27/140
   w3     0      0      0       0        0      272/140

  Open formulae:
   n      0      1      2       3        4         5
   w0     2     3/2    8/3    55/24    66/20    4277/1440
   w1     0      0    -4/3     5/24   -84/20   -3171/1440
   w2     0      0      0       0     156/20    3934/1440

TABLE 9.2. Weights of closed (top) and open (bottom) Newton-Cotes formulae
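
The closed weights can be recovered by integrating the polynomials φ_i exactly;
a minimal MATLAB sketch (not one of the book's programs) is the following:

n = 4;                                % reproduces the n = 4 column of Table 9.2
w = zeros(1,n+1);
for i = 0:n
  nodes = setdiff(0:n, i);
  p = poly(nodes)/prod(i - nodes);    % coefficients of phi_i(t)
  P = polyint(p);                     % antiderivative of phi_i
  w(i+1) = polyval(P,n) - polyval(P,0);
end
disp(w)                               % expected [14 64 24 64 14]/45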

Besides its degree of exactness, a quadrature formula can also be qualified
by its order of infinitesimal with respect to the integration stepsize h, which
is defined as the maximum integer p such that |I(f) − I_n(f)| = O(h^p).
Regarding this, the following result holds.

Theorem 9.2 For any Newton-Cotes formula corresponding to an even
value of n, the following error characterization holds:

   E_n(f) = (M_n/(n + 2)!) h^{n+3} f^{(n+2)}(ξ),              (9.19)

provided f ∈ C^{n+2}([a, b]), where ξ ∈ (a, b) and

   M_n = ∫_0^n t π_{n+1}(t) dt < 0         for closed formulae,
   M_n = ∫_{−1}^{n+1} t π_{n+1}(t) dt > 0  for open formulae,

having defined π_{n+1}(t) = Π_{i=0}^n (t − i). From (9.19), it turns out that the
degree of exactness is equal to n + 1 and the order of infinitesimal is n + 3.

Similarly, for odd values of n, the following error characterization holds:

   E_n(f) = (K_n/(n + 1)!) h^{n+2} f^{(n+1)}(η),              (9.20)

provided f ∈ C^{n+1}([a, b]), where η ∈ (a, b) and

   K_n = ∫_0^n π_{n+1}(t) dt < 0           for closed formulae,
   K_n = ∫_{−1}^{n+1} π_{n+1}(t) dt > 0    for open formulae.

The degree of exactness is thus equal to n and the order of infinitesimal is
n + 2.

Proof. We give a proof in the particular case of closed formulae with n even,
referring to [IK66], pp. 308-314, for a complete demonstration of the theorem.
Thanks to (8.20), we have

   E_n(f) = I(f) − I_n(f) = ∫_a^b f[x_0, ..., x_n, x] ω_{n+1}(x) dx.        (9.21)

Set W(x) = ∫_a^x ω_{n+1}(t) dt. Clearly, W(a) = 0; moreover, ω_{n+1}(t) is an odd
function with respect to the midpoint (a + b)/2, so that W(b) = 0. Integrating
(9.21) by parts we get

   E_n(f) = ∫_a^b f[x_0, ..., x_n, x] W'(x) dx = − ∫_a^b (d/dx) f[x_0, ..., x_n, x] W(x) dx
          = − ∫_a^b (f^{(n+2)}(ξ(x))/(n + 2)!) W(x) dx.

In deriving the formula above we have used the following identity (see Exercise 4)

   (d/dx) f[x_0, ..., x_n, x] = f[x_0, ..., x_n, x, x].                     (9.22)

Since W(x) > 0 for a < x < b (see [IK66], p. 309), using the mean-value theorem
we obtain

   E_n(f) = −(f^{(n+2)}(ξ)/(n + 2)!) ∫_a^b W(x) dx = −(f^{(n+2)}(ξ)/(n + 2)!) ∫_a^b ∫_a^x ω_{n+1}(t) dt dx,     (9.23)

where ξ lies within (a, b). Exchanging the order of integration, letting s = x_0 + τh,
for 0 ≤ τ ≤ n, and recalling that a = x_0, b = x_n, yields

   ∫_a^b W(x) dx = ∫_a^b ∫_s^b (s − x_0) ... (s − x_n) dx ds
                 = ∫_{x_0}^{x_n} (s − x_0) ... (s − x_{n−1})(s − x_n)(x_n − s) ds
                 = −h^{n+3} ∫_0^n τ(τ − 1) ... (τ − n + 1)(τ − n)² dτ.

Finally, letting t = n − τ and combining this result with (9.23), we get (9.19). □

Relations (9.19) and (9.20) are a priori estimates for the quadrature error
(see Chapter 2, Section 2.3). Their use in generating a posteriori estimates
of the error in the frame of adaptive algorithms will be examined in Section
9.7.
In the case of closed Newton-Cotes formulae, we show in Table 9.3, for
1 ≤ n ≤ 6, the degree of exactness (which we denote henceforth by r_n) and
the absolute value of the constant M̄_n = M_n/(n + 2)! (if n is even) or
K̄_n = K_n/(n + 1)! (if n is odd).

Example 9.2 The purpose of this example is to assess the importance of the
regularity assumption on f for the error estimates (9.19) and (9.20). Consider
the closed Newton-Cotes formulae, for 1 ≤ n ≤ 6, to approximate the integral
∫_0^1 x^{5/2} dx = 2/7 ≃ 0.2857. Since f is only C²([0, 1]), we do not expect a substantial
increase of the accuracy as n gets larger. Actually, this is confirmed by Table
9.4, where the results obtained by running Program 74 are reported.

   n   r_n    M̄_n     K̄_n        n   r_n    M̄_n      K̄_n        n   r_n    M̄_n      K̄_n
   1    1      -      1/12        3    3      -       3/80        5    5      -     275/12096
   2    3    1/90      -          4    5    8/945      -          6    7    9/1400     -

TABLE 9.3. Degree of exactness and error constants for closed Newton-Cotes
formulae
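
These constants can be checked numerically; a minimal sketch (using the definitions
of M_n and K_n given in Theorem 9.2) is:

for n = 1:6
  pin = poly(0:n);                        % pi_{n+1}(t) = t(t-1)...(t-n)
  if mod(n,2) == 0
    P = polyint(conv([1 0], pin));        % antiderivative of t*pi_{n+1}(t)
    c = abs(polyval(P,n) - polyval(P,0))/factorial(n+2);
  else
    P = polyint(pin);                     % antiderivative of pi_{n+1}(t)
    c = abs(polyval(P,n) - polyval(P,0))/factorial(n+1);
  end
  fprintf('n = %d   constant = %12.6e\n', n, c);   % cf. Table 9.3
end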

For n = 1, ..., 6, we have denoted by E^c_n(f) the module of the absolute error,
by q^c_n the computed order of infinitesimal and by q^s_n the corresponding theoretical
value predicted by (9.19) and (9.20) under optimal regularity assumptions for f.
As is clearly seen, q^c_n is definitely less than the potential theoretical value q^s_n. •




   n    E^c_n(f)       q^c_n   q^s_n       n    E^c_n(f)       q^c_n   q^s_n
   1    0.2143           3       3         4    5.009·10^-5     4.7      7
   2    1.196·10^-3      3.2     5         5    3.189·10^-5     2.6      7
   3    5.753·10^-4      3.8     5         6    7.857·10^-6     3.7      9

TABLE 9.4. Error in the approximation of ∫_0^1 x^{5/2} dx




Example 9.3 From a brief analysis of the error estimates (9.19) and (9.20), we could
be led to believe that only non-smooth functions can be a source of trouble when
dealing with Newton-Cotes formulae. Thus, it is a little surprising to see results
like those in Table 9.5, concerning the approximation of the integral

   I(f) = ∫_{−5}^5 1/(1 + x²) dx = 2 arctan 5 ≃ 2.747,        (9.24)

where f(x) = 1/(1 + x²) is Runge's function (see Section 8.1.2), which belongs to
C^∞(R). The results clearly demonstrate that the error remains almost unchanged
as n grows. This is due to the fact that singularities on the imaginary axis may
also affect the convergence properties of a quadrature formula. This is indeed the
case with the function at hand, which exhibits two singularities at ±√−1 (see
[DR75], pp. 64-66).
[DR75], pp. 64-66).


   n    E_n(f)        n    E_n(f)        n    E_n(f)
   1    0.8601        3    0.2422        5    0.1599
   2   -1.474         4    0.1357        6   -0.4091

TABLE 9.5. Relative error E_n(f) = [I(f) − I_n(f)]/I_n(f) in the approximate
evaluation of (9.24) using closed Newton-Cotes formulae

To increase the accuracy of an interpolatory quadrature rule, it is by
no means convenient to increase the value of n. By doing so, the same
drawbacks of Lagrange interpolation on equally spaced nodes would arise.
For example, the weights of the closed Newton-Cotes formula with n = 8
do not have the same sign (see Table 9.6 and recall that w_i = w_{n−i} for
i = 0, ..., n − 1).

   n      w_0           w_1            w_2            w_3            w_4          r_n       M̄_n
   8   3956/14175   23552/14175   -3712/14175    41984/14175   -18160/14175       9    2368/467775

TABLE 9.6. Weights of the closed Newton-Cotes formula with 9 nodes

This can give rise to numerical instabilities, due to rounding errors (see
Chapter 2), and makes this formula useless in practice, as happens for
all the Newton-Cotes formulae using more than 8 nodes. As an alternative,
one can resort to composite formulae, whose error analysis is addressed in
Section 9.4, or to Gaussian formulae, which will be dealt with in Chapter
10 and which yield maximum degree of exactness with a non equally spaced
nodes distribution.

The closed Newton-Cotes formulae, for 1 ≤ n ≤ 6, are implemented in
Program 74.

Program 74 - newtcot : Closed Newton-Cotes formulae

function int = newtcot(a,b,n,fun)
h=(b-a)/n; n2=fix(n/2);
if n > 6, disp('maximum value of n equal to 6'); return; end
a03=1/3; a08=1/8; a45=1/45; a288=1/288; a140=1/140;
alpha=[0.5 0 0 0; ...
a03 4*a03 0 0; ...
3*a08 9*a08 0 0; ...
14*a45 64*a45 24*a45 0; ...
95*a288 375*a288 250*a288 0; ...
41*a140 216*a140 27*a140 272*a140];
x=a; y(1)=eval(fun);
for j=2:n+1, x=x+h; y(j)=eval(fun); end; int=0;
for j=1:n2+1, int=int+y(j)*alpha(n,j); end;
for j=n2+2:n+1, int=int+y(j)*alpha(n,n-j+2); end; int=int*h;
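
For instance, the n = 3 entry of Table 9.4 could be reproduced by the call
(a hypothetical usage example):

int = newtcot(0, 1, 3, 'x^(5/2)');
err = abs(2/7 - int)            % approximately 5.75e-4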




9.4 Composite Newton-Cotes Formulae
The examples of Section 9.2 have already pointed out that composite
Newton-Cotes formulae can be constructed by replacing f with its composite
Lagrange interpolating polynomial, introduced in Section 8.1.

The general procedure consists of partitioning the integration interval
[a, b] into m subintervals T_j = [y_j, y_{j+1}] such that y_j = a + jH, where H =
(b − a)/m, for j = 0, ..., m. Then, over each subinterval, an interpolatory
formula with nodes {x_k^(j), 0 ≤ k ≤ n} and weights {α_k^(j), 0 ≤ k ≤ n} is
used. Since

   I(f) = ∫_a^b f(x) dx = Σ_{j=0}^{m−1} ∫_{T_j} f(x) dx,

a composite interpolatory quadrature formula is obtained by replacing I(f)
with

   I_{n,m}(f) = Σ_{j=0}^{m−1} Σ_{k=0}^n α_k^(j) f(x_k^(j)).   (9.25)

The quadrature error is defined as E_{n,m}(f) = I(f) − I_{n,m}(f). In particular,
over each subinterval T_j one can resort to a Newton-Cotes formula with
n + 1 equally spaced nodes: in such a case, the weights α_k^(j) = h w_k are still
independent of T_j.
Using the same notation as in Theorem 9.2, the following convergence
result holds for composite formulae.

Theorem 9.3 Let a composite Newton-Cotes formula, with n even, be
used. If f ∈ C^{n+2}([a, b]), then

   E_{n,m}(f) = ((b − a)/(n + 2)!) (M_n/(n + 2)^{n+3}) H^{n+2} f^{(n+2)}(ξ),       (9.26)

where ξ ∈ (a, b). Therefore, the quadrature error is an infinitesimal in H
of order n + 2 and the formula has degree of exactness equal to n + 1.
For a composite Newton-Cotes formula, with n odd, if f ∈ C^{n+1}([a, b])

   E_{n,m}(f) = ((b − a)/(n + 1)!) (K_n/n^{n+2}) H^{n+1} f^{(n+1)}(η),             (9.27)

where η ∈ (a, b). Thus, the quadrature error is an infinitesimal in H of
order n + 1 and the formula has degree of exactness equal to n.
Proof. We only consider the case where n is even. Using (9.19), and noticing
that M_n does not depend on the integration interval, we get

   E_{n,m}(f) = Σ_{j=0}^{m−1} [ I(f)|_{T_j} − I_n(f)|_{T_j} ] = (M_n/(n + 2)!) Σ_{j=0}^{m−1} h_j^{n+3} f^{(n+2)}(ξ_j),

where, for j = 0, ..., m − 1, h_j = |T_j|/(n + 2) = (b − a)/(m(n + 2)); this time,
ξ_j is a suitable point of T_j. Since (b − a)/m = H, we obtain

   E_{n,m}(f) = (M_n/(n + 2)!) ((b − a)/(m(n + 2)^{n+3})) H^{n+2} Σ_{j=0}^{m−1} f^{(n+2)}(ξ_j),

from which, applying Theorem 9.1 with u(x) = f^{(n+2)}(x) and δ_j = 1 for j =
0, ..., m − 1, (9.26) immediately follows. A similar procedure can be followed to
prove (9.27). □

We notice that, for n fixed, E_{n,m}(f) → 0 as m → ∞ (i.e., as H → 0).
This ensures the convergence of the numerical integral to the exact value
I(f). We notice also that the degree of exactness of composite formulae
coincides with that of the simple formulae, whereas their order of infinitesimal
(with respect to H) is reduced by 1 with respect to the order of infinitesimal
(in h) of the simple formulae.
In practical computations, it is convenient to resort to a local interpolation
of low degree (typically n ≤ 2, as done in Section 9.2); this leads to composite
quadrature rules with positive weights and a minimization of the
rounding errors.

Example 9.4 For the same integral (9.24) considered in Example 9.3, we show
in Table 9.7 the behavior of the absolute error as a function of the number of
subintervals m, in the case of the composite midpoint, trapezoidal and Cavalieri-Simpson
formulae. Convergence of I_{n,m}(f) to I(f) as m increases can be clearly
observed. Moreover, we notice that |E_{0,m}(f)| ≃ |E_{1,m}(f)|/2 for m ≥ 32 (see
Exercise 1).

    m    |E_{0,m}|      |E_{1,m}|      |E_{2,m}|
    1    7.253          2.362          4.04
    2    1.367          2.445          9.65·10^-2
    8    3.90·10^-2     3.77·10^-2     1.35·10^-2
   32    1.20·10^-4     2.40·10^-4     4.55·10^-8
  128    7.52·10^-6     1.50·10^-5     1.63·10^-10
  512    4.70·10^-7     9.40·10^-7     6.36·10^-13

TABLE 9.7. Absolute error for composite quadratures in the computation of
(9.24)



Convergence of In,m (f ) to I(f ) can be established under less stringent
regularity assumptions on f than those required by Theorem 9.3. In this
regard, the following result holds (see for the proof [IK66], pp. 341-343).
(j)
Property 9.1 Let f ∈ C 0 ([a, b]) and assume that the weights ±k in (9.25)
are nonnegative. Then
b
∀n ≥ 0.
lim In,m (f ) = f (x)dx,
m’∞ a
Moreover
b
f (x)dx ’ In,m (f ) ¤ 2(b ’ a)„¦(f ; H),
a
386 9. Numerical Integration

where

„¦(f ; H) = sup{|f (x) ’ f (y)|, x, y ∈ [a, b], x = y, |x ’ y| ¤ H}

is the module of continuity of function f .


9.5 Hermite Quadrature Formulae
Thus far we have considered quadrature formulae based on Lagrange interpolation
(simple or composite). More accurate formulae can be devised by
resorting to Hermite interpolation (see Section 8.4).
Suppose that the 2(n + 1) values f(x_k), f'(x_k) are available at n + 1 distinct
points x_0, ..., x_n; then the Hermite interpolating polynomial of f is given
by

   H_{2n+1}f(x) = Σ_{i=0}^n f(x_i) L_i(x) + Σ_{i=0}^n f'(x_i) M_i(x),       (9.28)

where the polynomials L_k, M_k ∈ P_{2n+1} are defined, for k = 0, ..., n, as

   L_k(x) = [ 1 − (ω''_{n+1}(x_k)/ω'_{n+1}(x_k)) (x − x_k) ] l_k²(x),    M_k(x) = (x − x_k) l_k²(x).

Integrating (9.28) over [a, b], we get the quadrature formula of type (9.4)

   I_n(f) = Σ_{k=0}^n α_k f(x_k) + Σ_{k=0}^n β_k f'(x_k),                   (9.29)

where

   α_k = I(L_k),   β_k = I(M_k),   k = 0, ..., n.

Formula (9.29) has degree of exactness equal to 2n + 1. Taking n = 1, the
so-called corrected trapezoidal formula is obtained

   I_1^corr(f) = ((b − a)/2) [f(a) + f(b)] + ((b − a)²/12) [f'(a) − f'(b)],   (9.30)

with weights α_0 = α_1 = (b − a)/2, β_0 = (b − a)²/12 and β_1 = −β_0. Assuming
f ∈ C⁴([a, b]), the quadrature error associated with (9.30) is

   E_1^corr(f) = (h⁵/720) f^(4)(ξ),   h = b − a,                            (9.31)

with ξ ∈ (a, b). Notice the increase of accuracy from O(h³) to O(h⁵) with
respect to the corresponding expression (9.12) (of the same order as the
Cavalieri-Simpson formula (9.15)). The composite formula can be generated
in a similar manner

   I_{1,m}^corr(f) = ((b − a)/m) { (1/2)[f(x_0) + f(x_m)] + f(x_1) + ... + f(x_{m−1}) }
                     + ((b − a)²/(12m²)) [f'(a) − f'(b)],                   (9.32)

where the assumption that f ∈ C¹([a, b]) gives rise to the cancellation of
the first derivatives at the nodes x_k, with k = 1, ..., m − 1.
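
Example 9.5 below runs Program 75, which is not shown here; a minimal sketch
of how the composite formula (9.32) could be coded in the style of Programs 71-73,
under a hypothetical name and assuming fun and dfun are vectorized string
expressions of x for f and f', is:

function int = corrtrapc(a,b,m,fun,dfun)
h=(b-a)/m; x=[a:h:b]; y=eval(fun);
x=[a b]; dy=eval(dfun);                                   % f'(a) and f'(b)
int=h*(0.5*y(1)+sum(y(2:m))+0.5*y(m+1))+(h^2/12)*(dy(1)-dy(2));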

Example 9.5 Let us check experimentally the error estimate (9.31) in the simple
(m = 1) and composite (m > 1) cases, running Program 75 for the approximate
computation of integral (9.18). Table 9.8 reports the behavior of the module of
the absolute error as H is halved (that is, m is doubled) and the ratio R_m between
two consecutive errors. This ratio, as happens in the case of the Cavalieri-Simpson
formula, tends to 16, demonstrating that formula (9.32) has order of infinitesimal
equal to 4. Comparing Table 9.8 with the corresponding Table 9.1, we can also
notice that |E_{1,m}^corr(f)| ≃ 4|E_{2,m}(f)| (see Exercise 9). •




    m   E_{1,m}^corr(f)   R_m        m   E_{1,m}^corr(f)   R_m        m    E_{1,m}^corr(f)   R_m
    1     3.4813            -        8     4.4·10^-3        6.1       64     1.1·10^-6       15.957
    2     1.398            2.4      16     2.9·10^-4       14.9      128     7.3·10^-8       15.990
    4     2.72·10^-2        -       32     1.8·10^-5         -       256     4.5·10^-9         -
