TABLE 9.8. Absolute error for the corrected trapezoidal formula in the computation of I(f) = ∫_0^{2π} x e^{−x} cos(2x)dx

The corrected composite trapezoidal quadrature is implemented in Program 75, where dfun contains the expression of the derivative of f.

Program 75 - trapmodc : Composite corrected trapezoidal formula

function int = trapmodc(a,b,m,fun,dfun)
h=(b-a)/m; x=[a:h:b]; y=eval(fun);
f1a=feval(dfun,a); f1b=feval(dfun,b);
int=h*(0.5*y(1)+sum(y(2:m))+0.5*y(m+1))+(h^2/12)*(f1a-f1b);

9.6 Richardson Extrapolation

The Richardson extrapolation method is a procedure which combines several approximations of a certain quantity α0 in a smart way to yield a more accurate approximation of α0. More precisely, assume that a method is available to approximate α0 by a quantity A(h) that is computable for any

388 9. Numerical Integration

value of the parameter h ≠ 0. Moreover, assume that, for a suitable k ≥ 0, A(h) can be expanded as follows

A(h) = α0 + α1 h + . . . + αk h^k + Rk+1(h),   (9.33)

where |Rk+1(h)| ≤ Ck+1 h^{k+1}. The constants Ck+1 and the coefficients αi, for i = 0, . . . , k, are independent of h. Henceforth, α0 = lim_{h→0} A(h).

Writing (9.33) with δh instead of h, for 0 < δ < 1 (typically, δ = 1/2), we get

A(δh) = α0 + α1(δh) + . . . + αk(δh)^k + Rk+1(δh).

Subtracting (9.33) multiplied by δ from this expression then yields

B(h) = [A(δh) − δA(h)]/(1 − δ) = α0 + α̂2 h^2 + . . . + α̂k h^k + R̂k+1(h),

having defined, for k ≥ 2, α̂i = αi(δ^i − δ)/(1 − δ), for i = 2, . . . , k, and R̂k+1(h) = [Rk+1(δh) − δRk+1(h)]/(1 − δ).

Notice that α̂i = 0 iff αi = 0. In particular, if α1 ≠ 0, then A(h) is a first-order approximation of α0, while B(h) is at least second-order accurate. More generally, if A(h) is an approximation of α0 of order p, then the quantity B(h) = [A(δh) − δ^p A(h)]/(1 − δ^p) approximates α0 up to order p + 1 (at least).

Proceeding by induction, the following Richardson extrapolation algorithm is generated: setting n ≥ 0, h > 0 and δ ∈ (0, 1), we construct the sequences

Am,0 = A(δ^m h),   m = 0, . . . , n,
Am,q+1 = [Am,q − δ^{q+1} Am−1,q] / [1 − δ^{q+1}],   q = 0, . . . , n − 1,   m = q + 1, . . . , n,   (9.34)

which can be represented by the diagram below

A0,0
A1,0 → A1,1
A2,0 → A2,1 → A2,2
A3,0 → A3,1 → A3,2 → A3,3
 .      .      .      .
An,0 → An,1 → An,2 → An,3 → . . . → An,n

where the arrows indicate the way the terms which have been already

computed contribute to the construction of the “new” ones.
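As an illustration (not part of the book's MATLAB programs), the construction (9.34) takes only a few lines; the following Python sketch builds the extrapolation table for a forward-difference approximation of a derivative, in the spirit of Example 9.6:

```python
def richardson(A, h, delta, n):
    """Build the table (9.34): T[m][q] refines the approximations A(delta^m h).

    A is assumed to expand as alpha_0 + alpha_1*h + ... + R_{k+1}(h), cf. (9.33).
    """
    T = [[0.0] * (n + 1) for _ in range(n + 1)]
    for m in range(n + 1):                       # first column: A_{m,0} = A(delta^m h)
        T[m][0] = A(delta**m * h)
    for q in range(n):                           # extrapolate column by column
        d = delta**(q + 1)
        for m in range(q + 1, n + 1):
            T[m][q + 1] = (T[m][q] - d * T[m - 1][q]) / (1.0 - d)
    return T

import math
# forward difference for f'(0) with f(x) = exp(x); exact value alpha_0 = 1
A = lambda h: (math.exp(h) - 1.0) / h
T = richardson(A, h=0.1, delta=0.5, n=5)
print(abs(T[5][0] - 1.0), abs(T[5][5] - 1.0))
```

The last diagonal entry is by far the most accurate, each column gaining one order of accuracy in h.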

The following result can be proved (see [Com95], Proposition 4.1).


Property 9.2 For n ≥ 0 and δ ∈ (0, 1),

Am,n = α0 + O((δ^m h)^{n+1}),   m = 0, . . . , n.   (9.35)

In particular, for the terms in the first column (n = 0) the convergence rate to α0 is O(δ^m h), while for those of the last one it is O((δ^m h)^{n+1}), i.e., n orders higher.

Example 9.6 Richardson extrapolation has been employed to approximate at x = 0 the derivative of the function f(x) = x e^{−x} cos(2x), introduced in Example 9.1. For this purpose, algorithm (9.34) has been executed with A(h) = [f(x + h) − f(x)]/h, δ = 0.5, n = 5 and h = 0.1. Table 9.9 reports the sequence of absolute errors Em,k = |α0 − Am,k|. The results demonstrate that the error decays as predicted by (9.35). •

Em,0        Em,1        Em,2        Em,3        Em,4        Em,5
0.113       –           –           –           –           –
5.3·10^−2   6.1·10^−3   –           –           –           –
2.6·10^−2   1.7·10^−3   2.2·10^−4   –           –           –
1.3·10^−2   4.5·10^−4   2.8·10^−5   5.5·10^−7   –           –
6.3·10^−3   1.1·10^−4   3.5·10^−6   3.1·10^−8   3.0·10^−9   –
3.1·10^−3   2.9·10^−5   4.5·10^−7   1.9·10^−9   9.9·10^−11  4.9·10^−12

TABLE 9.9. Errors in the Richardson extrapolation for the approximate evaluation of f′(0) where f(x) = x e^{−x} cos(2x)

9.6.1 Romberg Integration

The Romberg integration method is an application of Richardson extrap-

olation to the composite trapezoidal rule. The following result, known as

the Euler-MacLaurin formula, will be useful (for its proof see, e.g., [Ral65],

pp. 131-133, and [DR75], pp. 106-111).

Property 9.3 Let f ∈ C^{2k+2}([a, b]), for k ≥ 0, and let us approximate α0 = ∫_a^b f(x)dx by the composite trapezoidal rule (9.14). Letting hm = (b − a)/m for m ≥ 1,

I1,m(f) = α0 + Σ_{i=1}^{k} [B2i/(2i)!] hm^{2i} [f^{(2i−1)}(b) − f^{(2i−1)}(a)] + [B2k+2/(2k + 2)!] hm^{2k+2} (b − a) f^{(2k+2)}(η),   (9.36)

where η ∈ (a, b) and B2j = (−1)^{j−1} [Σ_{n=1}^{+∞} 2/(2nπ)^{2j}] (2j)!, for j ≥ 1, are the Bernoulli numbers.


Equation (9.36) is a special case of (9.33) where h = hm^2 and A(h) = I1,m(f); notice that only even powers of the stepsize hm appear in the expansion.

The Richardson extrapolation algorithm (9.34) applied to (9.36) gives

Am,0 = A(δ^m h),   m = 0, . . . , n,
Am,q+1 = [Am,q − δ^{2(q+1)} Am−1,q] / [1 − δ^{2(q+1)}],   q = 0, . . . , n − 1,   m = q + 1, . . . , n.   (9.37)

Setting h = b − a and δ = 1/2 in (9.37), and denoting by T(hs) = I1,s(f) the composite trapezoidal formula (9.14) over s = 2^m subintervals of width hs = (b − a)/2^m, for m ≥ 0, the algorithm (9.37) becomes

Am,0 = T((b − a)/2^m),   m = 0, . . . , n,
Am,q+1 = [4^{q+1} Am,q − Am−1,q] / [4^{q+1} − 1],   q = 0, . . . , n − 1,   m = q + 1, . . . , n.

This is the Romberg numerical integration algorithm. Recalling (9.35), the

following convergence result holds for Romberg integration

Am,n = ∫_a^b f(x)dx + O(hs^{2(n+1)}),   n ≥ 0.

Example 9.7 Table 9.10 shows the results obtained by running Program 76 to compute the quantity α0 in the two cases α0^(1) = ∫_0^π e^x cos(x)dx = −(e^π + 1)/2 and α0^(2) = ∫_0^1 √x dx = 2/3.

The maximum size n has been set equal to 9. In the second and third columns we show the absolute errors Ek^(r) = |α0^(r) − Ak+1,k+1|, for r = 1, 2 and k = 0, . . . , 7.

The convergence to zero is much faster for Ek^(1) than for Ek^(2). Indeed, the first integrand function is infinitely differentiable whereas the second is only continuous. •

k   Ek^(1)        Ek^(2)
0   22.71         0.1670
1   0.4775        2.860·10^−2
2   5.926·10^−2   8.910·10^−3
3   7.410·10^−5   3.060·10^−3
4   8.923·10^−7   1.074·10^−3
5   6.850·10^−11  3.790·10^−4
6   5.330·10^−14  1.340·10^−4
7   0             4.734·10^−5

TABLE 9.10. Romberg integration for the approximate evaluation of ∫_0^π e^x cos(x)dx (error Ek^(1)) and ∫_0^1 √x dx (error Ek^(2))

The Romberg algorithm is implemented in Program 76.


Program 76 - romberg : Romberg integration

function [A]=romberg(a,b,n,fun)
for i=1:(n+1), A(i,1)=trapezc(a,b,2^(i-1),fun); end;
for j=2:(n+1), for i=j:(n+1),
A(i,j)=(4^(j-1)*A(i,j-1)-A(i-1,j-1))/(4^(j-1)-1); end; end;
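Program 76 relies on the external routine trapezc for its first column. For readers without that routine, the following self-contained Python transcription of the same scheme (an illustration, not one of the book's programs) computes the trapezoidal values directly:

```python
import math

def romberg(f, a, b, n):
    """Romberg table: trapezoidal first column plus Richardson extrapolation."""
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        m = 2**i                                  # s = 2^i subintervals
        h = (b - a) / m
        A[i][0] = h * (0.5 * f(a) + 0.5 * f(b)
                       + sum(f(a + k * h) for k in range(1, m)))
    for j in range(1, n + 1):                     # Romberg recurrence
        for i in range(j, n + 1):
            A[i][j] = (4**j * A[i][j - 1] - A[i - 1][j - 1]) / (4**j - 1)
    return A

# first case of Example 9.7: int_0^pi e^x cos(x) dx = -(e^pi + 1)/2
A = romberg(lambda x: math.exp(x) * math.cos(x), 0.0, math.pi, 6)
print(abs(A[6][6] + (math.exp(math.pi) + 1) / 2))   # error of the last diagonal entry
```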

9.7 Automatic Integration

An automatic numerical integration program, or automatic integrator, is a set of algorithms which yield an approximation of the integral I(f) = ∫_a^b f(x)dx within a given absolute tolerance, εa, or relative tolerance, εr, prescribed by the user.

With this aim, the program generates a sequence {Ik, Ek}, for k = 1, . . . , N, where Ik is the approximation of I(f) at the k-th step of the computational process, Ek is an estimate of the error I(f) − Ik, and N is a suitable fixed integer.

The sequence terminates at the s-th level, with s ≤ N, such that the automatic integrator fulfills the following requirement on the accuracy

max{εa, εr |Ĩ(f)|} ≥ |Es| (≃ |I(f) − Is|),   (9.38)

where Ĩ(f) is a reasonable guess of the integral I(f) provided as an input datum by the user. Otherwise, the integrator returns the last computed approximation IN, together with a suitable error message that warns the user of the algorithm's failure to converge.

Ideally, an automatic integrator should:

(a) provide a reliable criterion for determining |Es| that allows for monitoring the convergence check (9.38);

(b) ensure an efficient implementation, which minimizes the number of functional evaluations for yielding the desired approximation Is.

In computational practice, for each k ≥ 1, moving from level k to level

k + 1 of the automatic integration process can be done according to two

different strategies, which we define as non adaptive or adaptive.

In the non adaptive case, the law of distribution of the quadrature nodes

is fixed a priori and the quality of the estimate Ik is refined by increasing the number of nodes corresponding to each level of the computational

process. An example of an automatic integrator that is based on such a

procedure is provided by the composite Newton-Cotes formulae on m and


2m subintervals, respectively, at levels k and k + 1, as described in Section

9.7.1.

In the adaptive case, the positions of the nodes are not set a priori; at each level k of the process they depend on the information that has been stored during the previous k − 1 levels. An adaptive automatic integration

algorithm is performed by partitioning the interval [a, b] into successive

subdivisions which are characterized by a nonuniform density of the nodes,

this density being typically higher in a neighborhood of strong gradients

or singularities of f . An example of an adaptive integrator based on the

Cavalieri-Simpson formula is described in Section 9.7.2.

9.7.1 Non Adaptive Integration Algorithms

In this section, we employ the composite Newton-Cotes formulae. Our aim is to devise a criterion for estimating the absolute error |I(f) − Ik| by using Richardson extrapolation. From (9.26) and (9.27) it turns out that, for m ≥ 1 and n ≥ 0, In,m(f) has order of infinitesimal equal to H^{n+p}, with p = 2 for n even and p = 1 for n odd, where m, n and H = (b − a)/m are the number of partitions of [a, b], the number of quadrature nodes over each subinterval and the constant length of each subinterval, respectively.

By doubling the value of m (i.e., halving the stepsize H) and proceeding by extrapolation, we get

I(f) − In,2m(f) ≃ (1/2^{n+p}) [I(f) − In,m(f)].   (9.39)

The use of the symbol ≃ instead of = is due to the fact that the point ξ or η, where the derivative in (9.26) and (9.27) must be evaluated, changes when passing from m to 2m subintervals. Solving (9.39) with respect to I(f) yields the following absolute error estimate for In,2m(f)

I(f) − In,2m(f) ≃ [In,2m(f) − In,m(f)] / (2^{n+p} − 1).   (9.40)

If the composite Simpson rule is considered (i.e., n = 2), (9.40) predicts a reduction of the absolute error by a factor of 15 when passing from m to 2m subintervals. Notice also that only 2m − 1 extra functional evaluations are needed to compute the new approximation I1,2m(f) starting from I1,m(f).

Relation (9.40) is an instance of an a posteriori error estimate (see Chapter 2, Section 2.3). It is based on the combined use of an a priori estimate (in this case, (9.26) or (9.27)) and of two evaluations of the quantity to be approximated (the integral I(f)) for two different values of the discretization parameter (that is, H = (b − a)/m).
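A non adaptive integrator built on (9.40) simply doubles m until the estimated error meets the tolerance. A minimal Python sketch (the function names and the stopping rule are illustrative, not from the book):

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson rule on m subintervals of [a, b]."""
    H = (b - a) / m
    return sum((H / 6) * (f(a + k * H) + 4 * f(a + (k + 0.5) * H)
                          + f(a + (k + 1) * H)) for k in range(m))

def autosimpson(f, a, b, tol, mmax=2**20):
    """Double m until (9.40) with n = p = 2 (denominator 2^(n+p) - 1 = 15)
    estimates the error of I_{2,2m} below tol."""
    m, Im = 1, simpson(f, a, b, 1)
    while m < mmax:
        I2m = simpson(f, a, b, 2 * m)
        if abs(I2m - Im) / 15 < tol:      # a posteriori estimate (9.40)
            return I2m
        m, Im = 2 * m, I2m
    raise RuntimeError("tolerance not reached")

# integrand of Example 9.8: int_0^pi (e^(x/2) + cos 4x) dx = 2(e^(pi/2) - 1)
I = autosimpson(lambda x: math.exp(x / 2) + math.cos(4 * x), 0.0, math.pi, 1e-6)
print(abs(I - 2 * (math.exp(math.pi / 2) - 1)))
```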


Example 9.8 Let us employ the a posteriori estimate (9.40) in the case of the composite Simpson formula (n = p = 2), for the approximation of the integral

∫_0^π (e^{x/2} + cos 4x)dx = 2(e^{π/2} − 1) ≃ 7.621,

where we require the absolute error to be less than 10^−4. For k = 0, 1, . . ., set hk = (b − a)/2^k and denote by I2,m(k)(f) the integral of f which is computed using the composite Simpson formula on a grid of size hk with m(k) = 2^k intervals. We can thus assume as a conservative estimate of the quadrature error the following quantity

|Ek| = |I(f) − I2,m(k)(f)| ≃ (1/10)|I2,2m(k)(f) − I2,m(k)(f)| = |Ẽk|,   k ≥ 1.   (9.41)

Table 9.11 shows the sequence of the estimated errors |Ẽk| and of the corresponding absolute errors |Ek| that have been actually made by the numerical integration process. Notice that, when convergence has been achieved, the error estimated by (9.41) is definitely higher than the actual error, due to the conservative choice above. •

k   |Ẽk|        |Ek|
0   –           3.156
1   0.42        1.047
2   0.10        4.52·10^−5
3   5.8·10^−6   2·10^−9

TABLE 9.11. Non adaptive automatic Simpson rule for the approximation of ∫_0^π (e^{x/2} + cos 4x)dx

An alternative approach for fulfilling the constraints (a) and (b) consists of employing a nested sequence of special Gaussian quadratures Ik(f) (see Chapter 10), having increasing degree of exactness for k = 1, . . . , N. These formulae are constructed in such a way that, denoting by Snk = {x1, . . . , xnk} the set of quadrature nodes relative to quadrature Ik(f), Snk ⊂ Snk+1 for any k = 1, . . . , N − 1. As a result, for k ≥ 1, the formula at the (k + 1)-th level employs all the nodes of the formula at level k, and this makes nested formulae quite effective for computer implementation.

As an example, we recall the Gauss-Kronrod formulae with 10, 21, 43 and 87 points, that are available in [PdKÜK83] (in this case, N = 4). The Gauss-Kronrod formulae have degree of exactness rnk (optimal) equal to 2nk − 1, where nk is the number of nodes for each formula, with n1 = 10 and nk+1 = 2nk + 1 for k = 1, 2, 3. The criterion for devising an error estimate is based on comparing the results given by two successive formulae Ink(f) and Ink+1(f) with k = 1, 2, 3, and then terminating the computational process at the level k such that (see also [DR75], p. 321)

|Ik+1 − Ik| ≤ max{εa, εr|Ik+1|}.


9.7.2 Adaptive Integration Algorithms

The goal of an adaptive integrator is to yield an approximation of I(f) within a fixed tolerance ε by a nonuniform distribution of the integration stepsize along the interval [a, b]. An optimal algorithm is able to adapt the choice of the steplength automatically according to the behavior of the integrand function, by increasing the density of the quadrature nodes where the function exhibits stronger variations.

In view of describing the method, it is convenient to restrict our attention to a generic subinterval [α, β] ⊆ [a, b]. Recalling the error estimates for the Newton-Cotes formulae, it turns out that the evaluation of the derivatives of f, up to a certain order, is needed to set a stepsize h such that a fixed accuracy is ensured, say ε(β − α)/(b − a). This procedure, which is unfeasible in practical computations, is carried out by an automatic integrator as follows. We consider throughout this section the Cavalieri-Simpson formula (9.15), although the method can be extended to other quadrature rules.

Set If(α, β) = ∫_α^β f(x)dx, h = h0 = (β − α)/2 and

Sf(α, β) = (h0/3)[f(α) + 4f(α + h0) + f(β)].

From (9.16) we get

If(α, β) − Sf(α, β) = −(h0^5/90) f^{(4)}(ξ),   (9.42)

where ξ is a point in (α, β). To estimate the error If(α, β) − Sf(α, β) without using explicitly the function f^{(4)}, we employ again the Cavalieri-Simpson formula over the union of the two subintervals [α, (α + β)/2] and [(α + β)/2, β], obtaining, for h = h0/2 = (β − α)/4,

If(α, β) − Sf,2(α, β) = −[(h0/2)^5/90] [f^{(4)}(ξ) + f^{(4)}(η)],

where ξ ∈ (α, (α + β)/2), η ∈ ((α + β)/2, β) and Sf,2(α, β) = Sf(α, (α + β)/2) + Sf((α + β)/2, β).

Let us now make the assumption that f^{(4)}(ξ) ≃ f^{(4)}(η) (which is true, in general, only if the function f^{(4)} does not vary "too much" on [α, β]). Then,

If(α, β) − Sf,2(α, β) ≃ −(1/16)(h0^5/90) f^{(4)}(ξ),   (9.43)

with a reduction of the error by a factor 16 with respect to (9.42), corresponding to a steplength of doubled size. Comparing (9.42) and (9.43), we get the estimate

(h0^5/90) f^{(4)}(ξ) ≃ (16/15) Ef(α, β),


where Ef(α, β) = Sf(α, β) − Sf,2(α, β). Then, from (9.43), we have

|If(α, β) − Sf,2(α, β)| ≃ |Ef(α, β)|/15.   (9.44)

We have thus obtained a formula that allows for easily computing the error made by using composite Cavalieri-Simpson numerical integration on the generic interval [α, β]. Relation (9.44), as well as (9.40), is another instance of an a posteriori error estimate. It combines the use of an a priori estimate (in this case, (9.16)) and of two evaluations of the quantity to be approximated (the integral I(f)) for two different values of the discretization parameter h.

In practice, it might be convenient to adopt a more conservative error estimate, namely

|If(α, β) − Sf,2(α, β)| ≃ |Ef(α, β)|/10.

Moreover, to ensure a global accuracy on [a, b] equal to the fixed tolerance ε, it will suffice to enforce that the error Ef(α, β) satisfies on each single subinterval [α, β] ⊆ [a, b] the following constraint

|Ef(α, β)|/10 ≤ ε (β − α)/(b − a).   (9.45)

The adaptive automatic integration algorithm can be described as follows.

Denote by:

1. A: the active integration interval, i.e., the interval where the integral

is being computed;

2. S: the integration interval already examined, for which the error test

(9.45) has been successfully passed;

3. N : the integration interval yet to be examined.

At the beginning of the integration process we have N = [a, b], A = N and S = ∅, while the situation at the generic step of the algorithm is depicted in Figure 9.3. Set JS(f) ≃ ∫_a^α f(x)dx, with JS(f) = 0 at the beginning of the process; if the algorithm successfully terminates, JS(f) yields the desired approximation of I(f). We also denote by J(α,β)(f) the approximate integral of f over the "active" interval [α, β]. This interval is drawn in bold in Figure 9.3. At each step of the adaptive integration method the following decisions are taken:

1. If the local error test (9.45) is passed, then:

(i) JS(f) is increased by J(α,β)(f), that is, JS(f) ← JS(f) + J(α,β)(f);

(ii) we let S ← S ∪ A, A = N (corresponding to the path (I) in Figure 9.3), β = b.


FIGURE 9.3. Distribution of the integration intervals S, A = [α, β] and N within [a, b] at the generic step of the adaptive algorithm, and updating of the integration grid along the paths (I) and (II)

2. If the local error test (9.45) fails, then:

(j) A is halved, and the new active interval is set to A = [α, α′] with α′ = (α + β)/2 (corresponding to the path (II) in Figure 9.3);

(jj) we let N ← N ∪ [α′, β], β ← α′;

(jjj) a new error estimate is provided.

In order to prevent the algorithm from generating too small stepsizes, it

is convenient to monitor the width of A and warn the user, in case of

an excessive reduction of the steplength, about the presence of a possible

singularity in the integrand function (see Section 9.8).
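The bookkeeping with S, A and N can also be realized recursively instead of with the explicit interval stack of Program 77; the following Python sketch (an illustration of ours, using the conservative estimate |Ef|/10 and the local test (9.45)) is logically equivalent:

```python
import math

def adaptive_simpson(f, a, b, eps, hmin=1e-10):
    """Adaptive Cavalieri-Simpson: accept S_{f,2} on [al, be] when
    |E_f(al, be)|/10 <= eps*(be - al)/(b - a), cf. (9.44)-(9.45)."""
    def simp(u, v):                  # Cavalieri-Simpson formula (9.15) on [u, v]
        return ((v - u) / 6) * (f(u) + 4 * f((u + v) / 2) + f(v))
    def rec(al, be):
        S = simp(al, be)
        c = (al + be) / 2
        S2 = simp(al, c) + simp(c, be)           # S_{f,2}
        if abs(S - S2) / 10 <= eps * (be - al) / (b - a):
            return S2                            # local test passed: accept
        if be - al < hmin:
            raise RuntimeError("steplength below hmin: possible singularity")
        return rec(al, c) + rec(c, be)           # halve the active interval
    return rec(a, b)

# integrand of Example 9.9; reference value from the primitive
# x*atan(10x) - log(1 + 100x^2)/20 evaluated between -3 and 4
exact = 4 * math.atan(40) - 3 * math.atan(30) - math.log(1601 / 901) / 20
J = adaptive_simpson(lambda x: math.atan(10 * x), -3.0, 4.0, 1e-4)
print(abs(J - exact))
```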

Example 9.9 Let us employ Cavalieri-Simpson adaptive integration for computing the integral

I(f) = ∫_{−3}^{4} tan^{−1}(10x)dx = 4 tan^{−1}(40) + 3 tan^{−1}(−30) − (1/20) log(16/9) ≃ 1.54201193.

Running Program 77 with tol = 10^−4 and hmin = 10^−3 yields an approximation of the integral with an absolute error of 2.104·10^−5. The algorithm performs 77 functional evaluations, corresponding to partitioning the interval [a, b] into 38 nonuniform subintervals. We notice that the corresponding composite formula with uniform stepsize would have required 128 subintervals with an absolute error of 2.413·10^−5.

In Figure 9.4 (left) we show, together with the plot of the integrand function, the distribution of the quadrature nodes as a function of x, while on the right the (piecewise constant) integration step density ∆h(x) is shown, defined as the inverse of the step size h over each active interval A. Notice the high value attained by ∆h at x = 0, where the derivative of the integrand function is maximum. •

The adaptive algorithm described above is implemented in Program 77.

Among the input parameters, hmin is the minimum admissible value of

the integration steplength. In output the program returns the approximate


FIGURE 9.4. Distribution of quadrature nodes (left); density of the integration stepsize in the approximation of the integral of Example 9.9 (right)

value of the integral integ, the total number of functional evaluations nfv

and the set of integration points xfv.

Program 77 - simpadpt : Adaptive Cavalieri-Simpson formula

function [integ,xfv,nfv]=simpadpt(a,b,tol,fun,hmin)
integ=0; level=0; i=1; alfa(i)=a; beta(i)=b;
step=(beta(i)-alfa(i))/4; nfv=0;
for k=1:5, x=a+(k-1)*step; f(i,k)=eval(fun); nfv=nfv+1; end
while (i > 0),
 S=0; S2=0; h=(beta(i)-alfa(i))/2; S=(h/3)*(f(i,1)+4*f(i,3)+f(i,5));
 h=h/2; S2=(h/3)*(f(i,1)+4*f(i,2)+f(i,3));
 S2=S2+(h/3)*(f(i,3)+4*f(i,4)+f(i,5));
 tolrv=tol*(beta(i)-alfa(i))/(b-a); errrv=abs(S-S2)/10;
 if (errrv > tolrv)
  i=i+1; alfa(i)=alfa(i-1); beta(i)=(alfa(i-1)+beta(i-1))/2;
  f(i,1)=f(i-1,1); f(i,3)=f(i-1,2); f(i,5)=f(i-1,3); len=abs(beta(i)-alfa(i));
  if (len >= hmin),
   if (len <= 11*hmin)
    disp(' Steplength close to hmin '),
    str=sprintf('The approximate integral is %12.7e',integ); disp(str), end;
   step=len/4; x=alfa(i)+step; f(i,2)=eval(fun);
   nfv=nfv+1; x=beta(i)-step; f(i,4)=eval(fun); nfv=nfv+1;
  else, xfv=xfv'; disp(' Too small steplength ')
   str=sprintf('The approximate integral is %12.7e',integ);
   disp(str), return
  end, else
  integ=integ+S2; level=level+1; if (level==1),
   for k=1:5, xfv(k)=alfa(i)+(k-1)*h; end; ist=5;
  else, for k=1:4, xfv(ist+k)=alfa(i)+k*h; end; ist=ist+4; end;
  if (beta(i)==b), xfv=xfv';
   str=sprintf('The approximate integral is %12.7e',integ);
   disp(str), return, end; i=i-1; alfa(i)=beta(i+1);
  f(i,1)=f(i+1,5); f(i,3)=f(i,4); step=abs(beta(i)-alfa(i))/4;
  x=alfa(i)+step; f(i,2)=eval(fun); nfv=nfv+1; x=beta(i)-step;
  f(i,4)=eval(fun); nfv=nfv+1;
 end
end

9.8 Singular Integrals

In this section we extend our analysis to deal with singular integrals, arising when f has finite jumps or is even infinite at some point. Besides, we will consider the case of integrals of bounded functions over unbounded intervals. We briefly address the most relevant numerical techniques for properly handling these integrals.

9.8.1 Integrals of Functions with Finite Jump Discontinuities

Let c be a known point within [a, b] and assume that f is a continuous and bounded function in [a, c) and (c, b], with finite jump f(c+) − f(c−). Since

I(f) = ∫_a^b f(x)dx = ∫_a^c f(x)dx + ∫_c^b f(x)dx,   (9.46)

any integration formula of the previous sections can be used on [a, c−] and [c+, b] to furnish an approximation of I(f). We proceed similarly if f admits a finite number of jump discontinuities within [a, b].

When the position of the discontinuity points of f is not known a priori,

a preliminary analysis of the graph of the function should be carried out.

Alternatively, one can resort to an adaptive integrator that is able to detect

the presence of discontinuities when the integration steplength falls below

a given tolerance (see Section 9.7.2).
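As a small illustration of (9.46) (the integrand below is an assumption of ours, not from the text), splitting at the jump lets a standard rule recover full accuracy, using the one-sided values f(c−) and f(c+) at the splitting point:

```python
def simpson(f, a, b, m):
    """Composite Simpson rule on m subintervals of [a, b]."""
    H = (b - a) / m
    return sum((H / 6) * (f(a + k * H) + 4 * f(a + (k + 0.5) * H)
                          + f(a + (k + 1) * H)) for k in range(m))

# f(x) = x on [0, 1) and f(x) = x + 1 on [1, 2]: finite jump at c = 1.
# Following (9.46), integrate each branch on its own subinterval.
c = 1.0
I_split = (simpson(lambda x: x, 0.0, c, 4)           # left branch, f(c-) = 1
           + simpson(lambda x: x + 1.0, c, 2.0, 4))  # right branch, f(c+) = 2
print(I_split)   # exact value: 1/2 + 5/2 = 3.0
```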

9.8.2 Integrals of In¬nite Functions

Let us deal with the case in which lim_{x→a+} f(x) = ∞; similar considerations hold when f is infinite as x → b−, while the case of a point of singularity c internal to the interval [a, b] can be recast to one of the previous two cases owing to (9.46). Assume that the integrand function is of the form

f(x) = φ(x)/(x − a)^µ,   0 ≤ µ < 1,

where φ is a function whose absolute value is bounded by M. Then

|I(f)| ≤ M lim_{t→a+} ∫_t^b (x − a)^{−µ} dx = M (b − a)^{1−µ}/(1 − µ).


Suppose we wish to approximate I(f) up to a fixed tolerance δ. For this, let us describe the following two methods (for further details, see also [IK66], Section 7.6, and [DR75], Section 2.12 and Appendix 1).

Method 1. For any ε such that 0 < ε < (b − a), we write the singular integral as I(f) = I1 + I2, where

I1 = ∫_a^{a+ε} φ(x)/(x − a)^µ dx,   I2 = ∫_{a+ε}^b φ(x)/(x − a)^µ dx.

The computation of I2 is not troublesome. After replacing φ by its p-th order Taylor expansion around x = a, we obtain

φ(x) = Φp(x) + [(x − a)^{p+1}/(p + 1)!] φ^{(p+1)}(ξ(x)),   p ≥ 0,   (9.47)

where Φp(x) = Σ_{k=0}^{p} φ^{(k)}(a)(x − a)^k/k!. Then

I1 = ε^{1−µ} Σ_{k=0}^{p} [ε^k φ^{(k)}(a)]/[k!(k + 1 − µ)] + [1/(p + 1)!] ∫_a^{a+ε} (x − a)^{p+1−µ} φ^{(p+1)}(ξ(x))dx.

Replacing I1 by the finite sum, the corresponding error E1 can be bounded as

|E1| ≤ [ε^{p+2−µ}/((p + 1)!(p + 2 − µ))] max_{a≤x≤a+ε} |φ^{(p+1)}(x)|,   p ≥ 0.   (9.48)

For fixed p, the right side of (9.48) is an increasing function of ε. On the other hand, taking ε < 1 and assuming that the successive derivatives of φ do not grow too much as p increases, the same function is decreasing as p grows.

Let us next approximate I2 using a composite Newton-Cotes formula with m subintervals and n quadrature nodes for each subinterval, n being an even integer. Recalling (9.26) and aiming at equidistributing the error δ between I1 and I2, it turns out that

|E2| ≤ [(b − a − ε)/(n + 2)!] [|Mn|/n^{n+3}] [(b − a − ε)/m]^{n+2} M^{(n+2)}(ε) = δ/2,   (9.49)

where

M^{(n+2)}(ε) = max_{a+ε≤x≤b} |d^{n+2}/dx^{n+2} [φ(x)/(x − a)^µ]|.

The value of the constant M^{(n+2)}(ε) grows rapidly as ε tends to zero; as a consequence, (9.49) might require such a large number of subintervals mε = m(ε) as to make the method at hand of little practical use.


Example 9.10 Consider the singular integral (known as the Fresnel integral)

I(f) = ∫_0^{π/2} [cos(x)/√x] dx.   (9.50)

Expanding the integrand function in a Taylor series around the origin and applying the theorem of integration by series, we get

I(f) = Σ_{k=0}^{∞} [(−1)^k/(2k)!] (π/2)^{2k+1/2}/(2k + 1/2).

Truncating the series at the first 10 terms, we obtain an approximate value of the integral equal to 1.9549.

Using the composite Cavalieri-Simpson formula, the a priori estimate (9.49) yields, as ε tends to zero and letting n = 2, |M2| = 4/15,

mε ≃ [(0.018/δ)(π/2 − ε)^5 ε^{−9/2}]^{1/4}.

For δ = 10^−4, taking ε = 10^−2, it turns out that 1140 (uniform) subintervals are needed, while for ε = 10^−4 and ε = 10^−6 the number of subintervals is 2·10^5 and 3.6·10^7, respectively.

As a comparison, running Program 77 (adaptive integration with Cavalieri-Simpson formula) with a = ε = 10^−10, hmin = 10^−12 and tol = 10^−4, we get the approximate value 1.955 for the integral at the price of 1057 functional evaluations, which correspond to 528 nonuniform subdivisions of the interval [0, π/2]. •

Method 2. Using the Taylor expansion (9.47) we obtain

I(f) = ∫_a^b [φ(x) − Φp(x)]/(x − a)^µ dx + ∫_a^b Φp(x)/(x − a)^µ dx = I1 + I2.

Exact computation of I2 yields

I2 = (b − a)^{1−µ} Σ_{k=0}^{p} [(b − a)^k φ^{(k)}(a)]/[k!(k + 1 − µ)].   (9.51)

The integral I1 is, for p ≥ 0,

I1 = ∫_a^b (x − a)^{p+1−µ} [φ^{(p+1)}(ξ(x))/(p + 1)!] dx = ∫_a^b g(x)dx.   (9.52)

Unlike the case of method 1, the integrand function g does not blow up at x = a, since its first p derivatives are finite at x = a. As a consequence, assuming we approximate I1 using a composite Newton-Cotes formula, it is possible to give an estimate of the quadrature error, provided that p ≥ n + 2, for n ≥ 0 even, or p ≥ n + 1, for n odd.


Example 9.11 Consider again the singular Fresnel integral (9.50), and assume we use the composite Cavalieri-Simpson formula for approximating I1. We will take p = 4 in (9.51) and (9.52). Computing I2 exactly yields the value (π/2)^{1/2}(2 − (1/5)(π/2)^2 + (1/108)(π/2)^4) ≃ 1.9588. Applying the error estimate (9.26) with n = 2 shows that only 2 subdivisions of [0, π/2] suffice for approximating I1 up to an error δ = 10^−4, obtaining the value I1 ≃ −0.0039. As a whole, method 2 returns for (9.50) the approximate value 1.9549, in agreement with the series value of Example 9.10. •
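The computation of Example 9.11 can be checked in a few lines; this Python sketch of method 2 for (9.50) with p = 4 (the variable names are ours) combines the exact value of I2 from (9.51) with a composite Simpson approximation of I1:

```python
import math

def simpson(f, a, b, m):
    """Composite Simpson rule on m subintervals of [a, b]."""
    H = (b - a) / m
    return sum((H / 6) * (f(a + k * H) + 4 * f(a + (k + 0.5) * H)
                          + f(a + (k + 1) * H)) for k in range(m))

a, b, mu, p = 0.0, math.pi / 2, 0.5, 4
dphi = [1.0, 0.0, -1.0, 0.0, 1.0]        # phi^(k)(0) for phi(x) = cos(x)
# I2 exactly, by (9.51)
I2 = (b - a)**(1 - mu) * sum((b - a)**k * dphi[k]
                             / (math.factorial(k) * (k + 1 - mu))
                             for k in range(p + 1))
# I1 by (9.52): the integrand vanishes like x^(p + 1 - mu) at 0, so a plain
# composite Simpson rule applies (64 subintervals used here for safety)
Phi = lambda x: sum(dphi[k] * x**k / math.factorial(k) for k in range(p + 1))
g = lambda x: 0.0 if x == 0.0 else (math.cos(x) - Phi(x)) / x**mu
I1 = simpson(g, a, b, 64)
print(I2, I1 + I2)   # I2 close to 1.9588, total close to 1.9549
```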

9.8.3 Integrals over Unbounded Intervals

Let f ∈ C^0([a, +∞)); should it exist and be finite, the following limit

lim_{t→+∞} ∫_a^t f(x)dx

is taken as being the value of the singular integral

I(f) = ∫_a^∞ f(x)dx = lim_{t→+∞} ∫_a^t f(x)dx.   (9.53)

An analogous definition holds if f is continuous over (−∞, b], while for a function f : R → R, integrable over any bounded interval, we let

∫_{−∞}^{∞} f(x)dx = ∫_{−∞}^{c} f(x)dx + ∫_{c}^{+∞} f(x)dx   (9.54)

if c is any real number and the two singular integrals on the right hand side of (9.54) are convergent. This definition is correct since the value of I(f) does not depend on the choice of c.

A sufficient condition for f to be integrable over [a, +∞) is that

∃ρ > 0, such that lim_{x→+∞} x^{1+ρ} f(x) = 0,

that is, we require f to be infinitesimal of order > 1 with respect to 1/x as x → ∞. For the numerical approximation of (9.53) up to a tolerance δ, we consider the following methods, referring for further details to [DR75], Chapter 3.

Method 1. To compute (9.53), we can split I(f) as I(f) = I1 + I2, where I1 = ∫_a^c f(x)dx and I2 = ∫_c^∞ f(x)dx.

The end-point c, which can be taken arbitrarily, is chosen in such a way that the contribution of I2 is negligible. Precisely, taking advantage of the asymptotic behavior of f, c is selected to guarantee that I2 equals a fraction of the fixed tolerance, say, I2 = δ/2.

Then, I1 will be computed up to an absolute error equal to δ/2. This

ensures that the global error in the computation of I1 + I2 is below the

tolerance δ.


Example 9.12 Compute up to an error δ = 10^−3 the integral

I(f) = ∫_0^∞ cos^2(x) e^{−x} dx = 3/5.

For any given c > 0, we have I2 = ∫_c^∞ cos^2(x)e^{−x}dx ≤ ∫_c^∞ e^{−x}dx = e^{−c}; requiring that e^{−c} = δ/2, one gets c ≃ 7.6. Then, assuming we use the composite trapezoidal formula for approximating I1, thanks to (9.27) with n = 1 and M = max_{0≤x≤c} |f″(x)| ≃ 1.04, we obtain m ≥ [Mc^3/(6δ)]^{1/2} ≃ 277.

Program 72 returns the value I1 ≃ 0.599905, instead of the exact value I1 = 3/5 − e^{−c}(cos^2(c) − (sin(2c) + 2 cos(2c))/5) ≃ 0.599842, with an absolute error of about 6.27·10^−5. The global numerical outcome is thus I1 + I2 ≃ 0.600405, with an absolute error with respect to I(f) equal to 4.05·10^−4. •
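Method 1 for Example 9.12 can be verified in a few lines of Python (the trapezoidal helper below stands in for Program 72):

```python
import math

def trapezoid(f, a, b, m):
    """Composite trapezoidal rule on m subintervals of [a, b]."""
    h = (b - a) / m
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, m)))

delta = 1e-3
f = lambda x: math.cos(x)**2 * math.exp(-x)
c = math.log(2 / delta)           # tail bound: I2 <= e^(-c) = delta/2, c ~ 7.6
I1 = trapezoid(f, 0.0, c, 277)    # m = 277 from the a priori bound (9.27)
print(abs(I1 + delta / 2 - 0.6))  # total error stays below delta
```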

Method 2. For any real number c, we let I(f) = I1 + I2, as for method 1; then we introduce the change of variable x = 1/t in order to transform I2 into an integral over the bounded interval [0, 1/c]

I2 = ∫_0^{1/c} f(1/t) t^{−2} dt = ∫_0^{1/c} g(t)dt.   (9.55)

If g(t) is not singular at t = 0, (9.55) can be treated by any quadrature formula introduced in this chapter. Otherwise, one can resort to the integration methods considered in Section 9.8.2.
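A tiny illustration of (9.55) (our own example, not from the text): for f(x) = 1/x^2 the transformed integrand g(t) = f(1/t) t^{−2} is identically 1, so even the midpoint rule recovers the tail exactly:

```python
f = lambda x: 1.0 / x**2
g = lambda t: f(1.0 / t) / t**2      # transformed integrand of (9.55)

c, m = 2.0, 10
h = (1.0 / c) / m                    # midpoint rule on [0, 1/c]
I2 = h * sum(g((k + 0.5) * h) for k in range(m))
print(I2)   # equals int_c^inf x^(-2) dx = 1/c
```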

Method 3. Gaussian interpolatory formulae are used, where the integration nodes are the zeros of the Laguerre and Hermite orthogonal polynomials (see Section 10.5).

9.9 Multidimensional Numerical Integration

Let Ω be a bounded domain in R^2 with a sufficiently smooth boundary. We consider the problem of approximating the integral I(f) = ∫_Ω f(x, y)dxdy, where f is a continuous function in Ω. For this purpose, in Sections 9.9.1 and 9.9.2 we address two methods.

The first method applies when Ω is a normal domain with respect to a coordinate axis. It is based on the reduction formula for double integrals and consists of using one-dimensional quadratures along both coordinate directions. The second method, which applies when Ω is a polygon, consists of employing composite quadratures of low degree on a triangular decomposition of the domain Ω. Section 9.9.3 briefly addresses the Monte Carlo method, which is particularly well-suited to integration in several dimensions.

9.9.1 The Method of Reduction Formula

Let Ω be a normal domain with respect to the x axis, as drawn in Figure 9.5, and assume for the sake of simplicity that φ2(x) > φ1(x), ∀x ∈ [a, b].

FIGURE 9.5. Normal domain with respect to x axis

The reduction formula for double integrals gives (with obvious choice of notation)

I(f) = ∫_a^b ∫_{φ1(x)}^{φ2(x)} f(x, y)dy dx = ∫_a^b Ff(x)dx.   (9.56)

The integral I(Ff) = ∫_a^b Ff(x)dx can be approximated by a composite quadrature rule using Mx subintervals {Jk, k = 1, . . . , Mx} of width H = (b − a)/Mx and, in each subinterval, nx^(k) + 1 nodes {xi^k, i = 0, . . . , nx^(k)}. Thus, in the x direction we can write

Inx^c(f) = Σ_{k=1}^{Mx} Σ_{i=0}^{nx^(k)} αi^k Ff(xi^k),

where the coefficients αi^k are the quadrature weights on each subinterval Jk. For each node xi^k, the approximate evaluation of the integral Ff(xi^k) is then carried out by a composite quadrature using My subintervals {Jm^{i,k}, m = 1, . . . , My} of width hi^k = (φ2(xi^k) − φ1(xi^k))/My and, in each subinterval, ny^(m) + 1 nodes {y_{j,m}^{i,k}, j = 0, . . . , ny^(m)}.

In the particular case M_x = M_y = M, n_x^{(k)} = n_y^{(m)} = 0, for k, m = 1, …, M, the resulting quadrature formula is the midpoint reduction formula

I_{0,0}^c(f) = H \sum_{k=1}^{M} \sum_{m=1}^{M} h_0^k\, f(x_0^k, y_{0,m}^{0,k}),


where H = (b − a)/M, x_0^k = a + (k − 1/2)H for k = 1, …, M and y_{0,m}^{0,k} = φ₁(x_0^k) + (m − 1/2)h_0^k for m = 1, …, M. With a similar procedure the trapezoidal reduction formula can be constructed along the coordinate directions (in that case, n_x^{(k)} = n_y^{(m)} = 1, for k, m = 1, …, M).

The efficiency of the approach can obviously be increased by employing the adaptive method described in Section 9.7.2 to suitably allocate the quadrature nodes x_i^k and y_{j,m}^{i,k} according to the variations of f over the domain Ω. The use of the reduction formulae above becomes less and less convenient as the dimension d of the domain Ω ⊂ R^d gets larger, due to the large increase in the computational effort. Indeed, if any simple integral requires N functional evaluations, the overall cost would be equal to N^d.

The midpoint and trapezoidal reduction formulae for approximating the integral (9.56) are implemented in Programs 78 and 79. For the sake of simplicity, we set M_x = M_y = M. The variables phi1 and phi2 contain the expressions of the functions φ₁ and φ₂ that delimit the integration domain.

Program 78 - redmidpt : Midpoint reduction formula

function inte=redmidpt(a,b,phi1,phi2,m,fun)

H=(b-a)/m; xx=[a+H/2:H:b]; dim=max(size(xx));

for i=1:dim, x=xx(i); d=eval(phi2); c=eval(phi1); h=(d-c)/m;

y=[c+h/2:h:d]; w=eval(fun); psi(i)=h*sum(w(1:m)); end;

inte=H*sum(psi(1:m));

Program 79 - redtrap : Trapezoidal reduction formula

function inte=redtrap(a,b,phi1,phi2,m,fun)

H=(b-a)/m; xx=[a:H:b]; dim=max(size(xx));

for i=1:dim, x=xx(i); d=eval(phi2); c=eval(phi1); h=(d-c)/m;

y=[c:h:d]; w=eval(fun); psi(i)=h*(0.5*w(1)+sum(w(2:m))+0.5*w(m+1));

end; inte=H*(0.5*psi(1)+sum(psi(2:m))+0.5*psi(m+1));
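Programs 78 and 79 are the reference MATLAB implementations. As an illustrative cross-check of the midpoint reduction formula itself, the following is a minimal Python sketch (the function name and the test domain are our own choices, not from the text); it integrates f(x, y) = x + y over the normal domain 0 ≤ y ≤ x, 0 ≤ x ≤ 1, whose exact integral is 1/2.

```python
def midpoint_reduction(a, b, phi1, phi2, M, f):
    """Midpoint reduction formula with M x M uniform cells:
    midpoint rule in x, and midpoint rule in y on each strip [phi1, phi2]."""
    H = (b - a) / M
    total = 0.0
    for k in range(M):
        xk = a + (k + 0.5) * H               # midpoint node in x
        c, d = phi1(xk), phi2(xk)
        h = (d - c) / M                      # strip-dependent step in y
        inner = sum(f(xk, c + (m + 0.5) * h) for m in range(M))
        total += h * inner
    return H * total

# normal domain 0 <= y <= x over [0, 1]; exact integral of x + y is 1/2
approx = midpoint_reduction(0.0, 1.0, lambda x: 0.0, lambda x: x,
                            100, lambda x, y: x + y)
```

With M = 100 the second-order accuracy of the midpoint rule makes the error of the order of 10⁻⁵.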

9.9.2 Two-Dimensional Composite Quadratures

In this section we extend to the two-dimensional case the composite interpolatory quadratures that have been considered in Section 9.4. We assume that Ω is a convex polygon on which we introduce a triangulation T_h of N_T triangles or elements, such that Ω = \bigcup_{T \in T_h} T, where the parameter h > 0 is the maximum edge-length in T_h (see Section 8.5.2).

Exactly as happens in the one-dimensional case, interpolatory composite quadrature rules on triangles can be devised by replacing \int_\Omega f(x,y)\,dx\,dy with \int_\Omega \Pi_h^k f(x,y)\,dx\,dy, where, for k ≥ 0, Π_h^k f is the composite interpolating polynomial of f on the triangulation T_h introduced in Section 8.5.2.


For an efficient evaluation of this last integral, we employ the property of additivity which, combined with (8.38), leads to the following interpolatory composite rule

I_k^c(f) = \int_\Omega \Pi_h^k f(x,y)\,dx\,dy = \sum_{T \in T_h} \int_T \Pi_T^k f(x,y)\,dx\,dy = \sum_{T \in T_h} I_k^T(f)
         = \sum_{T \in T_h} \sum_{j=0}^{d_k-1} f(\tilde{z}_j^T) \int_T l_{j,T}(x,y)\,dx\,dy = \sum_{T \in T_h} \sum_{j=0}^{d_k-1} \alpha_j^T f(\tilde{z}_j^T).   (9.57)

The coefficients α_j^T and the points z̃_j^T are called the local weights and nodes of the quadrature formula (9.57), respectively.

The weights α_j^T can be computed on the reference triangle T̂ of vertices (0, 0), (1, 0) and (0, 1), as follows

\alpha_j^T = \int_T l_{j,T}(x,y)\,dx\,dy = 2|T| \int_{\hat{T}} \hat{l}_j(\hat{x},\hat{y})\,d\hat{x}\,d\hat{y}, \qquad j = 0, \ldots, d_k - 1,

where |T| is the area of T. If k = 0, we get α_0^T = |T|, while if k = 1 we have α_j^T = |T|/3, for j = 0, 1, 2.

Denoting respectively by a_T^{(j)}, j = 1, 2, 3, the vertices of the triangle T ∈ T_h and by a_T = \sum_{j=1}^{3} a_T^{(j)}/3 its center of gravity, the following formulae are obtained.

Composite midpoint formula

I_0^c(f) = \sum_{T \in T_h} |T|\, f(a_T).   (9.58)

Composite trapezoidal formula

I_1^c(f) = \sum_{T \in T_h} \frac{|T|}{3} \sum_{j=1}^{3} f(a_T^{(j)}).   (9.59)

In view of the analysis of the quadrature error E_k^c(f) = I(f) − I_k^c(f), we introduce the following definition.

Definition 9.1 The quadrature formula (9.57) has degree of exactness equal to n, with n ≥ 0, if I_k^T(p) = \int_T p\,dx\,dy for any p ∈ P_n(T), where P_n(T) is defined in (8.35).

The following result can be proved (see [IK66], pp. 361–362).


Property 9.4 Assume that the quadrature rule (9.57) has degree of exactness on Ω equal to n, with n ≥ 0, and that its weights are all nonnegative. Then, there exists a positive constant K_n, independent of h, such that

|E_k^c(f)| \le K_n h^{n+1} |\Omega| M_{n+1},

for any function f ∈ C^{n+1}(Ω), where M_{n+1} is the maximum value of the moduli of the derivatives of order n + 1 of f and |Ω| is the area of Ω.

The composite formulae (9.58) and (9.59) both have degree of exactness equal to 1; then, due to Property 9.4, their order of infinitesimal with respect to h is equal to 2.
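The composite midpoint formula (9.58) can be sketched in Python as follows (the hypothetical two-triangle mesh of the unit square is our own test case, not from the text). Since the barycenter rule has degree of exactness 1, it reproduces the integral of a linear integrand exactly, up to rounding: for f(x, y) = 1 + 2x + 3y on the unit square the exact value is 3.5.

```python
def tri_area(v):
    """Area of a triangle given as a list of three (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = v
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def composite_midpoint_tri(triangles, f):
    """Composite midpoint (barycenter) rule (9.58): sum of |T| * f(a_T)."""
    total = 0.0
    for v in triangles:
        xg = sum(p[0] for p in v) / 3.0      # center of gravity a_T
        yg = sum(p[1] for p in v) / 3.0
        total += tri_area(v) * f(xg, yg)
    return total

# unit square split into two triangles (illustrative mesh)
mesh = [[(0, 0), (1, 0), (1, 1)], [(0, 0), (1, 1), (0, 1)]]
val = composite_midpoint_tri(mesh, lambda x, y: 1 + 2 * x + 3 * y)
```

The result agrees with the exact integral to machine precision, confirming the degree of exactness 1.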

An alternative family of quadrature rules on triangles is provided by the so-called symmetric formulae. These are Gaussian formulae with n nodes and high degree of exactness, and exhibit the feature that the quadrature nodes occupy symmetric positions with respect to all corners of the reference triangle T̂ or, as happens for Gauss-Radau formulae, with respect to the straight line y = x.

Considering the generic triangle T ∈ T_h and denoting by b_T^{(j)}, j = 1, 2, 3, the midpoints of the edges of T, two examples of symmetric formulae, having degree of exactness equal to 2 and 3, respectively, are the following

I_3(f) = \frac{|T|}{3} \sum_{j=1}^{3} f(b_T^{(j)}), \qquad n = 3,

I_7(f) = \frac{|T|}{60} \left[ 3 \sum_{i=1}^{3} f(a_T^{(i)}) + 8 \sum_{j=1}^{3} f(b_T^{(j)}) + 27 f(a_T) \right], \qquad n = 7,

where a_T^{(i)} and a_T denote, as above, the vertices and the center of gravity of T.
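A quick numerical confirmation of the first symmetric rule (the edge-midpoint rule, degree of exactness 2), sketched in Python (the helper below is ours, not from the text): for f(x, y) = x² on the reference triangle the rule reproduces the exact value ∫_T̂ x̂² dx̂dŷ = 1/12.

```python
def edge_midpoint_rule(v, f):
    """Symmetric rule with n = 3 nodes: (|T|/3) * sum of f at the three
    edge midpoints of the triangle (degree of exactness 2)."""
    (x1, y1), (x2, y2), (x3, y3) = v
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    mids = [((x1 + x2) / 2, (y1 + y2) / 2),
            ((x2 + x3) / 2, (y2 + y3) / 2),
            ((x3 + x1) / 2, (y3 + y1) / 2)]
    return area / 3.0 * sum(f(x, y) for x, y in mids)

ref = [(0, 0), (1, 0), (0, 1)]          # reference triangle T-hat
q = edge_midpoint_rule(ref, lambda x, y: x * x)   # exact value: 1/12
```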

For a description and analysis of symmetric formulae for triangles, see [Dun85], while we refer to [Kea86] and [Dun86] for their extension to tetrahedra and cubes.

The composite quadrature rules (9.58) and (9.59) are implemented in Programs 80 and 81 for the approximate evaluation of the integral of f(x, y) over a single triangle T ∈ T_h. To compute the integral over Ω it suffices to sum the result provided by the program over each triangle of T_h. The coordinates of the vertices of the triangle T are stored in the arrays xv and yv.

Program 80 - midptr2d : Midpoint rule on a triangle

function inte=midptr2d(xv,yv,fun)

y12=yv(1)-yv(2); y23=yv(2)-yv(3); y31=yv(3)-yv(1);

areat=0.5*abs(xv(1)*y23+xv(2)*y31+xv(3)*y12);

x=sum(xv)/3; y=sum(yv)/3; inte=areat*eval(fun);

Program 81 - traptr2d : Trapezoidal rule on a triangle


function inte=traptr2d(xv,yv,fun)

y12=yv(1)-yv(2); y23=yv(2)-yv(3); y31=yv(3)-yv(1);

areat=0.5*abs(xv(1)*y23+xv(2)*y31+xv(3)*y12); inte=0;

for i=1:3, x=xv(i); y=yv(i); inte=inte+eval(fun); end;

inte=inte*areat/3;

9.9.3 Monte Carlo Methods for Numerical Integration

Numerical integration methods based on Monte Carlo techniques are a valid tool for approximating multidimensional integrals when the dimension n of the space R^n gets large. These methods differ from the approaches considered thus far, since the choice of the quadrature nodes is done statistically according to the values attained by random variables having a known probability distribution.

The basic idea of the method is to interpret the integral as a statistical mean value

\int_\Omega f(x)\,dx = |\Omega| \int_{\mathbb{R}^n} |\Omega|^{-1} \chi_\Omega(x) f(x)\,dx = |\Omega| \mu(f),

where x = (x_1, x_2, …, x_n)^T and |Ω| denotes the n-dimensional volume of Ω, χ_Ω(x) is the characteristic function of the set Ω, equal to 1 for x ∈ Ω and to 0 elsewhere, while μ(f) is the mean value of the function f(X), where X is a random variable with uniform probability density |Ω|^{−1}χ_Ω over R^n.

We recall that the random variable X ∈ Rn (or, more properly, random

vector) is an n-tuple of real numbers X1 (ζ), . . . , Xn (ζ) assigned to every

outcome ζ of a random experiment (see [Pap87], Chapter 4).

Having fixed a vector x ∈ R^n, the probability P{X ≤ x} of the random event {X_1 ≤ x_1, …, X_n ≤ x_n} is given by

P\{X \le x\} = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f(X_1, \ldots, X_n)\,dX_1 \ldots dX_n,

where f(X) = f(X_1, …, X_n) is the probability density of the random variable X ∈ R^n, such that

f(X_1, \ldots, X_n) \ge 0, \qquad \int_{\mathbb{R}^n} f(X_1, \ldots, X_n)\,dX = 1.

The numerical computation of the mean value μ(f) is carried out by taking N independent samples x_1, …, x_N ∈ R^n with probability density |Ω|^{−1}χ_Ω and evaluating their average

\bar{f}_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i) = I_N(f).   (9.60)


From a statistical standpoint, the samples x_1, …, x_N can be regarded as the realizations of a sequence of N random variables {X_1, …, X_N}, mutually independent and each with probability density |Ω|^{−1}χ_Ω. For such a sequence the strong law of large numbers ensures, with probability 1, the convergence of the average I_N(f) = \left( \sum_{i=1}^{N} f(X_i) \right)/N to the mean value μ(f) as N → ∞. In computational practice the sequence of samples x_1, …, x_N is deterministically produced by a random-number generator, giving rise to the so-called pseudo-random integration formulae.

The quadrature error E_N(f) = μ(f) − I_N(f) as a function of N can be characterized through the standard deviation

\sigma(I_N(f)) = \left[ \mu\left( \left( I_N(f) - \mu(f) \right)^2 \right) \right]^{1/2}.

Interpreting again f as a function of the random variable X, distributed with uniform probability density |Ω|^{−1} in Ω ⊆ R^n and with standard deviation σ(f), we have

\sigma(I_N(f)) = \frac{\sigma(f)}{\sqrt{N}},   (9.61)

from which, as N → ∞, a convergence rate of O(N^{−1/2}) follows for the statistical estimate σ(I_N(f)) of the error. Such a convergence rate does not depend on the dimension n of the integration domain, and this is a most relevant feature of the Monte Carlo method. However, it is worth noting that the convergence rate is also independent of the regularity of f; thus, unlike interpolatory quadratures, Monte Carlo methods do not yield more accurate results when dealing with smooth integrands.

The estimate (9.61) is extremely weak and in practice one often obtains poorly accurate results. A more efficient implementation of Monte Carlo methods is based on composite approaches or semi-analytical methods; an example of these techniques is provided in [NAG95], where a composite Monte Carlo method is employed for the computation of integrals over hypercubes in R^n.
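A bare-bones Python sketch of the pseudo-random formula (9.60) (the function and the test integrand are illustrative choices, not from the text): for Ω = [0, 1]² we have |Ω| = 1, so the estimate reduces to the sample mean. With f(x, y) = xy the exact value is 1/4, and with N = 200000 samples the error is expected to be of order σ(f)/√N ≈ 5·10⁻⁴, in agreement with (9.61); a fixed seed makes the pseudo-random sequence reproducible.

```python
import random

def monte_carlo(f, n_dim, N, rng):
    """Plain Monte Carlo estimate (9.60) of the integral of f over the
    unit hypercube [0, 1]^n_dim (here |Omega| = 1, so the estimate is
    simply the sample mean of f)."""
    total = 0.0
    for _ in range(N):
        x = [rng.random() for _ in range(n_dim)]
        total += f(x)
    return total / N

rng = random.Random(0)   # fixed seed: reproducible pseudo-random samples
est = monte_carlo(lambda x: x[0] * x[1], 2, 200000, rng)  # exact: 1/4
```

Doubling the number of samples reduces the statistical error only by a factor of about √2, which illustrates the O(N^{−1/2}) rate.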

9.10 Applications

We consider in the next sections the computation of two integrals suggested

by applications in geometry and the mechanics of rigid bodies.

9.10.1 Computation of an Ellipsoid Surface

Let E be the ellipsoid obtained by rotating the ellipse in Figure 9.6 around the x axis, where the radius ρ is described as a function of the axial coordinate by the equation

\rho^2(x) = \alpha^2 (1 - \beta^2 x^2), \qquad -\frac{1}{\beta} \le x \le \frac{1}{\beta},

α and β being given constants, assigned in such a way that α²β² < 1.

FIGURE 9.6. Section of the ellipsoid

We set the following values for the parameters: α² = (3 − 2√2)/100 and β² = 100. Letting K² = β²(1 − α²β²), f(x) = \sqrt{1 - K^2 x^2} and θ = cos⁻¹(K/β), the computation of the surface of E requires evaluating the integral

I(f) = 4\pi\alpha \int_0^{1/\beta} f(x)\,dx = \frac{2\pi\alpha}{K} \left[ \left( \frac{\pi}{2} - \theta \right) + \frac{\sin(2\theta)}{2} \right].   (9.62)

Notice that f′(1/β) = −20, so the integrand varies rapidly near the right endpoint; this prompts us to use a numerical adaptive formula able to provide a nonuniform distribution of quadrature nodes, with a possible refinement of these nodes around x = 1/β.

Table 9.12 summarizes the results obtained using the composite midpoint, trapezoidal and Cavalieri-Simpson rules (respectively denoted by (MP), (TR) and (CS)), which are compared with Romberg integration (RO) and with the adaptive Cavalieri-Simpson quadrature introduced in Section 9.7.2 and denoted by (AD). In the table, m is the number of subintervals, while Err and flops denote the absolute quadrature error and the number of floating-point operations required by each algorithm, respectively. In the case of the AD method, we have run Program 77 taking hmin=10^{-5} and tol=10^{-8}, while for the Romberg method we have used Program 76 with n=9.

The results demonstrate the advantage of using the composite adaptive Cavalieri-Simpson formula, both in terms of computational efficiency and accuracy, as can be seen in the plots in Figure 9.7, which allow one to check the successful working of the adaptivity procedure. In Figure 9.7 (left) we show, together with the graph of f, the nonuniform distribution of the quadrature nodes on the x axis, while in Figure 9.7 (right) we plot the logarithmic graph of the (piecewise constant) integration step density ∆_h(x), defined as the inverse of the value of the stepsize h on each active interval A (see Section 9.7.2). Notice the high value of ∆_h at x = 1/β, where the derivative of the integrand function is largest in modulus.

          (MP)       (TR)       (CS)       (RO)       (AD)
m         4000       5600       250        –          50
Err       3.24e−10   3.30e−10   2.98e−10   3.58e−11   3.18e−10
flops     20007      29013      2519       5772       3540

TABLE 9.12. Methods for approximating I(f) = 4\pi\alpha \int_0^{1/\beta} \sqrt{1 - K^2 x^2}\,dx, with α² = (3 − 2√2)/100, β = 10 and K² = β²(1 − α²β²)

FIGURE 9.7. Distribution of the quadrature nodes (left); integration stepsize density in the approximation of integral (9.62) (right)
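The closed form in (9.62) can also be checked numerically. The following Python sketch (our own cross-check, not part of the text) compares a composite Cavalieri-Simpson approximation of 4πα ∫_0^{1/β} f(x) dx against the right-hand side of (9.62), using the parameter values stated above.

```python
import math

# parameters of Section 9.10.1
alpha = math.sqrt((3 - 2 * math.sqrt(2)) / 100)   # alpha^2 = (3 - 2*sqrt(2))/100
beta = 10.0                                        # beta^2 = 100
K = beta * math.sqrt(1 - alpha**2 * beta**2)       # K^2 = beta^2 (1 - alpha^2 beta^2)
theta = math.acos(K / beta)

def simpson(g, a, b, m):
    """Composite Cavalieri-Simpson rule on m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, m // 2))
    return h / 3 * s

f = lambda x: math.sqrt(1 - (K * x) ** 2)          # smooth on [0, 1/beta]
I_num = 4 * math.pi * alpha * simpson(f, 0.0, 1 / beta, 200)
I_exact = (2 * math.pi * alpha / K
           * ((math.pi / 2 - theta) + math.sin(2 * theta) / 2))
```

Since α²β² < 1, the integrand is smooth up to x = 1/β and the two values agree to many significant figures.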

9.10.2 Computation of the Wind Action on a Sailboat Mast

Let us consider the sailboat schematically drawn in Figure 9.8 (left) and subject to the action of the wind force. The mast, of length L, is denoted by the straight line AB, while one of the two shrouds (strings for the side stiffening of the mast) is represented by the straight line BO. Any infinitesimal element of the sail transmits to the corresponding element of length dx of the mast a force of magnitude equal to f(x)dx. The change of f with the height x, measured from the point A (base of the mast), is expressed by the following law

f(x) = \frac{\alpha x}{x + \beta} e^{-\gamma x},

where α, β and γ are given constants.


The resultant R of the force f is defined as

R = \int_0^L f(x)\,dx \equiv I(f),   (9.63)

and is applied at a point at distance equal to b (to be determined) from the base of the mast.

FIGURE 9.8. Schematic representation of a sailboat (left); forces acting on the mast (right)

Computing R and the distance b, given by b = I(xf)/I(f), is crucial for the structural design of the mast and shroud sections. Indeed, once the values of R and b are known, it is possible to analyze the hyperstatic mast-shroud structure (using, for instance, the method of forces), thus allowing for the computation of the reactions V, H and M at the base of the mast and of the traction T transmitted by the shroud, which are drawn in Figure 9.8 (right). Then, the internal actions in the structure can be found, as well as the maximum stresses arising in the mast AB and in the shroud BO, from which, assuming that the safety verifications are satisfied, one can finally design the geometrical parameters of the sections of AB and BO.

For the approximate computation of R we have considered the composite midpoint, trapezoidal and Cavalieri-Simpson rules, denoted henceforth by (MP), (TR) and (CS), and, for a comparison, the adaptive Cavalieri-Simpson quadrature formula introduced in Section 9.7.2 and denoted by (AD). Since a closed-form expression for the integral (9.63) is not available, the composite rules have been applied taking m_k = 2^k uniform partitions of [0, L], with k = 0, …, 15. We have assumed in the numerical experiments α = 50, β = 5/3 and γ = 1/4, and we have run Program 77 taking tol=10^{-4} and hmin=10^{-3}. The sequence of integrals computed using the composite formulae has been stopped at k = 12 (corresponding to m_k = 2^{12} = 4096) since the remaining


FIGURE 9.9. Relative errors in the approximate computation of the integral \int_0^L (\alpha x e^{-\gamma x})/(x + \beta)\,dx

elements, in the case of formula CS, differ among themselves only up to the last significant figure. Therefore, we have assumed as the exact value of I(f) the outcome I_{12}^{(CS)} = 100.0613683179612 provided by formula CS. We report in Figure 9.9 the logarithmic plots of the relative error |I_{12}^{(CS)} − I_k|/I_{12}^{(CS)}, for k = 0, …, 7, I_k being the generic element of the sequence for the three considered formulae. As a comparison, we also display the graph of the relative error in the case of formula AD, applied on a number of (nonuniform) partitions equivalent to that of the composite rules.

Notice how, for the same number of partitions, formula AD is more accurate, with a relative error of 2.06·10^{−7} obtained using 37 (nonuniform) partitions of [0, L]. Methods MP and TR achieve a comparable accuracy employing 2048 and 4096 uniform subintervals, respectively, while formula CS requires about 64 partitions. The effectiveness of the adaptivity procedure is demonstrated by the plots in Figure 9.10, which show, together with the graph of f, the distribution of the quadrature nodes (left) and the function ∆_h(x) (right) that expresses the (piecewise constant) density of the integration stepsize h, defined as the inverse of the value of h over each active interval A (see Section 9.7.2). Notice also the high value of ∆_h at x = 0, where the derivatives of f are largest in modulus.
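The computation of R can be reproduced with a short Python sketch (the Simpson helper is our own; the mast length L = 10 is a hypothetical value, since this excerpt does not state the one used in the experiments). Successive refinements of the composite Cavalieri-Simpson rule agree rapidly, consistent with its fourth-order convergence.

```python
import math

# load per unit length; alpha, beta, gamma as in the text.
# L = 10 is a hypothetical mast length (its value is not given here).
alpha, beta, gamma, L = 50.0, 5.0 / 3.0, 0.25, 10.0
f = lambda x: alpha * x * math.exp(-gamma * x) / (x + beta)

def simpson(g, a, b, m):
    """Composite Cavalieri-Simpson rule on m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, m // 2))
    return h / 3 * s

# two successive refinements: for a smooth integrand the composite rule
# converges with order 4, so consecutive values agree in many figures
R_coarse = simpson(f, 0.0, L, 512)
R_fine = simpson(f, 0.0, L, 1024)
```

The gap between consecutive refinements can serve as a crude a posteriori error indicator, in the spirit of the stopping criterion used for the sequence I_k above.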

9.11 Exercises

1. Let E_0(f) and E_1(f) be the quadrature errors in (9.6) and (9.12). Prove that |E_1(f)| ≃ 2|E_0(f)|.

2. Check that the error estimates for the midpoint, trapezoidal and Cavalieri-Simpson formulae, given respectively by (9.6), (9.12) and (9.16), are special instances of (9.19) or (9.20). In particular, show that M0 = 2/3, K1 =

