
When s0 is close to 1 and τ0 is small, the discrete wavelets are close to the continuous ones. For a fixed scale s0, the localization points of the discrete wavelets along the scale axis are logarithmically spaced, as log s = j log s0. The choice s0 = 2 corresponds to the dyadic sampling in frequency. The discrete time-step is τ0 s0^j and, typically, τ0 = 1. Hence, the time-sampling step is a function of the scale, and along the time axis the localization points of the wavelet depend on the scale.

For a given function f ∈ L¹(R), the corresponding discrete wavelet transform is

W_f(j, k) = ∫_{−∞}^{+∞} f(t) h̄_{j,k}(t) dt.

It is possible to introduce an orthonormal wavelet basis using discrete dilation and translation factors, i.e.

∫_{−∞}^{+∞} h_{i,j}(t) h̄_{k,l}(t) dt = δ_{ik} δ_{jl},   ∀ i, j, k, l ∈ Z.

With an orthogonal wavelet basis, an arbitrary function f can be reconstructed by the expansion

f(t) = A Σ_{j,k∈Z} W_f(j, k) h_{j,k}(t),

where A is a constant that does not depend on f.

From the computational standpoint, the discrete wavelet transform can be implemented at an even cheaper cost than the FFT algorithm for computing the Fourier transform.
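As a minimal concrete instance of such an orthonormal dyadic basis (s0 = 2, τ0 = 1), the sketch below computes the Haar wavelet pyramid. It is written in Python/NumPy rather than the MATLAB used elsewhere in this book; the function name and the power-of-two length restriction are our own choices.

```python
import numpy as np

def haar_dwt(f):
    # Full dyadic decomposition (s0 = 2, tau0 = 1) with the orthonormal
    # Haar wavelet. Each level halves the data, so the whole pyramid
    # costs O(N) operations, below the O(N log N) of the FFT.
    coeffs = []
    a = np.asarray(f, dtype=float)
    while a.size > 1:
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))  # detail coefficients
        a = (even + odd) / np.sqrt(2.0)             # approximation part
    coeffs.append(a)                                # coarsest-scale mean
    return coeffs
```

Because the basis is orthonormal, the energy of the coefficients equals the energy of the signal (a discrete counterpart of the reconstruction identity above).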


10.13 Applications

In this section we apply the theory of orthogonal polynomials to solve two problems arising in quantum physics. In the first example we deal with Gauss-Laguerre quadratures, while in the second case Fourier analysis and the FFT are considered.

10.13.1 Numerical Computation of Blackbody Radiation

The monochromatic energy density E(ν) of blackbody radiation as a function of frequency ν is expressed by the following law

E(ν) = (8πh/c³) ν³ / (e^{hν/(K_B T)} − 1),

where h is the Planck constant, c is the speed of light, K_B is the Boltzmann constant and T is the absolute temperature of the blackbody (see, for instance, [AF83]).

To compute the total density of monochromatic energy that is emitted by the blackbody (that is, the emitted energy per unit volume) we must evaluate the integral

E = ∫_0^∞ E(ν) dν = α T⁴ ∫_0^∞ x³/(eˣ − 1) dx,

where x = hν/(K_B T) and α = (8πK_B⁴)/(ch)³ ≃ 1.16 · 10⁻¹⁶ [J][K⁻⁴][m⁻³]. We also let f(x) = x³/(eˣ − 1) and I(f) = ∫_0^∞ f(x) dx.

To approximate I(f) up to a previously fixed absolute error ≤ δ, we compare method 1. introduced in Section 9.8.3 with Gauss-Laguerre quadratures.

In the case of method 1. we proceed as follows. For any a > 0 we let I(f) = ∫_0^a f(x) dx + ∫_a^∞ f(x) dx and try to find a function φ such that

∫_a^∞ f(x) dx ≤ ∫_a^∞ φ(x) dx ≤ δ/2,   (10.89)

the integral ∫_a^∞ φ(x) dx being "easy" to compute. Once the value of a has been found such that (10.89) is fulfilled, we compute the integral I₁(f) = ∫_0^a f(x) dx using, for instance, the adaptive Cavalieri-Simpson formula introduced in Section 9.7.2 and denoted in the following by AD.

A natural choice of a bounding function for f is φ(x) = Kx³e⁻ˣ, for a suitable constant K > 1. For f(x) ≤ φ(x) we need K ≥ eˣ/(eˣ − 1) for any x ≥ a; since the right-hand side is decreasing, it suffices to take its value at x = a, that is, K = eᵃ/(eᵃ − 1). Substituting back into (10.89) yields

∫_a^∞ f(x) dx ≤ (a³ + 3a² + 6a + 6)/(eᵃ − 1) = η(a) ≤ δ/2.
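The inequality η(a) ≤ δ/2 can be solved numerically by a simple scan. The following Python sketch (the function names and the grid step are our own choices) is consistent with the value a ≃ 16 used in the text for δ = 10⁻³.

```python
import math

def eta(a):
    # Tail bound from the text: integral_a^inf K x^3 e^{-x} dx
    # with K = e^a / (e^a - 1).
    return (a**3 + 3 * a**2 + 6 * a + 6) / (math.exp(a) - 1)

def smallest_a(delta, step=0.1):
    # Scan a coarse grid (the step size is our choice) for the first
    # point a with eta(a) <= delta / 2.
    a = step
    while eta(a) > delta / 2:
        a += step
    return a
```

For δ = 10⁻³ this scan stops slightly above a = 16, in agreement with the discussion below.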

464 10. Orthogonal Polynomials in Approximation Theory

FIGURE 10.10. Distribution of quadrature nodes and graph of the integrand function

Letting δ = 10⁻³, we see that (10.89) is satisfied by taking a ≃ 16. Program 77 for computing I₁(f) with the AD method, setting hmin = 10⁻³ and tol = 5 · 10⁻⁴, yields the approximate value I₁ ≃ 6.4934 with a number of (nonuniform) partitions equal to 25.

The distribution of the quadrature nodes produced by the adaptive algorithm is plotted in Figure 10.10. Globally, using method 1. yields an approximation of I(f) equal to 6.4984. Table 10.1 shows, for the sake of comparison, some approximate values of I(f) obtained using the Gauss-Laguerre formulae with the number of nodes varying from 2 to 20. Notice that, taking n = 4 nodes, the accuracy of the two computational procedures is roughly equivalent.

n     I_n(f)
2     6.413727469517582
3     6.481130171540022
4     6.494535639802632
5     6.494313365790864
10    6.493939967652101
15    6.493939402671590
20    6.493939402219742

TABLE 10.1. Approximate evaluation of I(f) = ∫_0^∞ x³/(eˣ − 1) dx with Gauss-Laguerre quadratures
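The Gauss-Laguerre values in Table 10.1 can be reproduced with NumPy's Gauss-Laguerre nodes and weights; the wrapper below is our own sketch. The Laguerre weight e⁻ˣ must be factored out of the integrand first, so the rule is applied to g(x) = eˣ f(x) = x³/(1 − e⁻ˣ).

```python
import numpy as np

def blackbody_integral(n):
    # n-node Gauss-Laguerre approximation of I(f) = int_0^inf x^3/(e^x - 1) dx:
    # int_0^inf e^{-x} g(x) dx ~ sum_i w_i g(x_i), with g(x) = x^3/(1 - e^{-x}).
    x, w = np.polynomial.laguerre.laggauss(n)
    return float(np.sum(w * x**3 / (1.0 - np.exp(-x))))
```

The exact value of this Bose integral is π⁴/15 ≈ 6.4939394, which the 20-node rule matches to about ten decimal digits, as in the table.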

10.13.2 Numerical Solution of the Schrödinger Equation

Let us consider the following differential equation arising in quantum mechanics, known as the Schrödinger equation

iℏ ∂ψ/∂t = −(ℏ²/2m) ∂²ψ/∂x²,   x ∈ R, t > 0.   (10.90)

The symbols i and ℏ denote the imaginary unit and the reduced Planck constant, respectively. The complex-valued function ψ = ψ(x, t), the solution of (10.90), is called a wave function, and the quantity |ψ(x, t)|² defines the probability density in the space x of a free electron of mass m at time t (see [FRL55]).

The corresponding Cauchy problem may represent a physical model for describing the motion of an electron in a cell of an infinite lattice (for more details see, e.g., [AF83]).

Consider the initial condition ψ(x, 0) = w(x), where w is the step function that takes the value 1/√(2b) for |x| ≤ b and is zero for |x| > b, with b = a/5, and where 2a represents the inter-ionic distance in the lattice. Therefore, we are searching for periodic solutions, with period equal to 2a.

Solving problem (10.90) can be carried out using Fourier analysis as follows. We first write the Fourier series of w and ψ (for any t > 0)

w(x) = Σ_{k=−N/2}^{N/2−1} w_k e^{iπkx/a},   w_k = (1/2a) ∫_{−a}^{a} w(x) e^{−iπkx/a} dx,
                                                                             (10.91)
ψ(x, t) = Σ_{k=−N/2}^{N/2−1} ψ_k(t) e^{iπkx/a},   ψ_k(t) = (1/2a) ∫_{−a}^{a} ψ(x, t) e^{−iπkx/a} dx.

Then, we substitute (10.91) back into (10.90), obtaining the following Cauchy problem for the Fourier coefficients ψ_k, for k = −N/2, . . . , N/2 − 1:

ψ_k′(t) = −i (ℏ/2m)(kπ/a)² ψ_k(t),
                                          (10.92)
ψ_k(0) = w_k.

The coefficients {w_k} actually used have been obtained by regularizing the Fourier coefficients of the step function w in (10.91) with the Lanczos smoothing (10.56), in order to avoid the Gibbs phenomenon arising around the discontinuities of w (see Section 10.9.1).

After solving (10.92) we finally get, recalling (10.91), the following expression for the wave function

ψ_N(x, t) = Σ_{k=−N/2}^{N/2−1} w_k e^{−iE_k t/ℏ} e^{iπkx/a},   (10.93)

where the coefficients E_k = (k²π²ℏ²)/(2ma²) represent, from the physical standpoint, the energy levels that the electron may assume in its motion within the potential well.

To compute the coefficients w_k (and, as a consequence, their regularized counterparts), we have used the MATLAB intrinsic function fft (see Section 10.9.2), employing N = 2⁶ = 64 points and letting a = 10 Å = 10⁻⁹ [m]. Time analysis has been carried out up to T = 10 [s], with time steps of 1 [s]; in all the reported


FIGURE 10.11. Probability density |ψ(x, t)|² at t = 0, 2, 5 [s], corresponding to a step function as initial datum: solution without filtering (left), with Lanczos filtering (right)

graphs, the x-axis is measured in [Å], while the y-axes are respectively in units of 10⁵ [m⁻¹/²] and 10¹⁰ [m⁻¹].

In Figure 10.11 we draw the probability density |ψ(x, t)|² at t = 0, 2 and 5 [s]. The result obtained without the regularizing procedure above is shown on the left, while the same calculation with the "filtering" of the Fourier coefficients is reported on the right. The second plot demonstrates the smoothing effect of the regularization on the solution, at the price of a slight enlargement of the step-like initial probability distribution.

Finally, it is interesting to apply Fourier analysis to solve problem (10.90) starting from a smooth initial datum. For this, we choose an initial probability density w of Gaussian form such that ‖w‖₂ = 1. The solution |ψ(x, t)|², this time computed without regularization, is shown in Figure 10.12 at t = 0, 2, 5, 7, 9 [s]. Notice the absence of spurious oscillations with respect to the previous case.
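A compact pseudo-spectral sketch of this procedure follows, in Python/NumPy instead of the MATLAB programs used in the book, in nondimensional units ℏ = m = 1 (our simplification), and without the Lanczos filtering. Since each Fourier mode only rotates in phase, cf. (10.93), the modulus |ψ_k(t)| = |w_k| is constant, so the discrete L² norm is conserved.

```python
import numpy as np

N, a = 64, 10.0
x = -a + 2.0 * a * np.arange(N) / N            # uniform periodic grid on (-a, a)
b = a / 5.0
w = np.where(np.abs(x) <= b, 1.0 / np.sqrt(2.0 * b), 0.0)  # step initial datum

wk = np.fft.fft(w) / N                 # discrete Fourier coefficients of w
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers -N/2 .. N/2-1
Ek = 0.5 * (np.pi * k / a) ** 2        # energy levels E_k with hbar = m = 1

def psi(t):
    # evolve each coefficient by its phase factor, cf. (10.93)
    return np.fft.ifft(wk * np.exp(-1j * Ek * t)) * N
```

Filtering would amount to multiplying the coefficients wk by the Lanczos smoothing factors before the evolution.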

FIGURE 10.12. Probability density |ψ(x, t)|² at t = 0, 2, 5, 7, 9 [s], corresponding to an initial datum with Gaussian form


10.14 Exercises

1. Prove the three-term relation (10.11).

[Hint: set x = cos(θ), for 0 ≤ θ ≤ π.]

2. Prove (10.31).
[Hint: first prove that ‖v_n‖_n = (v_n, v_n)_n^{1/2}, ‖T_k‖_n = ‖T_k‖_w for k < n and ‖T_n‖_n² = 2‖T_n‖_w² (see [QV94], formula (4.3.16)). Then, the thesis follows from (10.29) multiplying by T_l (l ≠ k) and taking (·, ·)_n.]

3. Prove (10.24) after showing that ‖(f − Π_n^{GL} f)′‖_ω ≤ C n^{1−s} ‖f‖_{s,ω}.
[Hint: use the Gagliardo-Nirenberg inequality

max_{−1≤x≤1} |f(x)| ≤ ‖f‖^{1/2} ‖f′‖^{1/2},

valid for any f ∈ L² with f′ ∈ L². Next, use the relation just shown to prove (10.24).]

4. Prove that the discrete seminorm ‖f‖_n = (f, f)_n^{1/2} is a norm for P_n.

5. Compute the weights and nodes of the following quadrature formulae

∫_a^b w(x)f(x) dx = Σ_{i=0}^{n} ω_i f(x_i),

in such a way that the order is maximum, setting

w(x) = √x,  a = 0, b = 1, n = 1;
w(x) = 2x² + 1,  a = −1, b = 1, n = 0;
w(x) = 2 if 0 < x ≤ 1, 1 if −1 ≤ x ≤ 0,  a = −1, b = 1, n = 1.

[Solution: for w(x) = √x, the nodes x₁ = 5/9 + (2/9)√(10/7), x₂ = 5/9 − (2/9)√(10/7) are obtained, from which the weights can be computed (order 3); for w(x) = 2x² + 1, we get x₁ = √(3/5) and ω₁ = 5/3 (order 1); for the piecewise weight, we have x₁ = 1/22 + (1/22)√155, x₂ = 1/22 − (1/22)√155 (order 3).]

6. Prove (10.40).
[Hint: notice that (Π_n^{GL} f, L_j)_n = Σ_k f_k* (L_k, L_j)_n = . . ., distinguishing the case j < n from the case j = n.]

7. Show that ||| · |||, defined in (10.45), is an essentially strict seminorm.
[Solution: use the Cauchy-Schwarz inequality (1.14) to check that the triangle inequality is satisfied. This proves that ||| · ||| is a seminorm. The second part of the exercise follows after a direct computation.]

8. Consider in an interval [a, b] the nodes

x_j = a + (j − 1/2)(b − a)/m,   j = 1, 2, . . . , m,


for m ≥ 1. They are the midpoints of m equally spaced intervals in [a, b]. Let f be a given function; prove that the least-squares polynomial r_n with respect to the weight w(x) = 1 minimizes the error average, defined as

E = lim_{m→∞} [ (1/m) Σ_{j=1}^{m} (f(x_j) − r_n(x_j))² ]^{1/2}.
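A quick numerical check of this minimization property (the sample function, degree and number of midpoints are our own choices, not part of the exercise):

```python
import numpy as np

a0, b0, m, n = 0.0, 1.0, 50, 3
xj = a0 + (np.arange(1, m + 1) - 0.5) * (b0 - a0) / m   # the m midpoints
f = np.exp(xj)             # sample function, chosen here for illustration

# least-squares polynomial of degree n on the midpoints (weight w = 1)
coef = np.polynomial.polynomial.polyfit(xj, f, n)

def mean_square_error(c):
    # the quantity whose square root appears in the error average E
    return float(np.mean((f - np.polynomial.polynomial.polyval(xj, c)) ** 2))
```

Perturbing the computed coefficients in any way can only increase the mean-square error, which is exactly the optimality the exercise asks to prove.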

9. Consider the function

F(a₀, a₁, . . . , a_n) = ∫_0^1 [ f(x) − Σ_{j=0}^{n} a_j x^j ]² dx

and determine the coefficients a₀, a₁, . . . , a_n in such a way that F is minimized. Which kind of linear system is obtained?

[Hint: enforce the conditions ∂F/∂a_i = 0 with i = 0, 1, . . . , n. The matrix of the final linear system is the Hilbert matrix (see Example 3.2, Chapter 3), which is strongly ill-conditioned.]
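The normal equations of Exercise 9 can be assembled explicitly to observe this ill-conditioning; in the following sketch (function name ours) the (i, j) entry is ∫_0^1 x^{i+j} dx = 1/(i + j + 1), i.e. the Hilbert matrix.

```python
import numpy as np

def normal_matrix(n):
    # (n+1) x (n+1) matrix of the system from dF/da_i = 0:
    # entry (i, j) = integral_0^1 x^{i+j} dx = 1/(i + j + 1)
    i = np.arange(n + 1)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

# spectral condition numbers for increasing polynomial degree
conds = [np.linalg.cond(normal_matrix(n)) for n in (2, 5, 8)]
```

The condition number grows extremely fast with n, which is why this naive monomial approach is avoided in favor of orthogonal polynomials.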

11 Numerical Solution of Ordinary Differential Equations

In this chapter we deal with the numerical solution of the Cauchy problem for ordinary differential equations (henceforth abbreviated as ODEs). After a brief review of basic notions about ODEs, we introduce the most widely used techniques for the numerical approximation of scalar equations. The concepts of consistency, convergence, zero-stability and absolute stability will be addressed. Then, we extend our analysis to systems of ODEs, with emphasis on stiff problems.

11.1 The Cauchy Problem

The Cauchy problem (also known as the initial-value problem) consists of finding the solution of an ODE, in the scalar or vector case, given suitable initial conditions. In particular, in the scalar case, denoting by I an interval of R containing the point t₀, the Cauchy problem associated with a first order ODE reads:

find a real-valued function y ∈ C¹(I), such that

y′(t) = f(t, y(t)),   t ∈ I,
                                (11.1)
y(t₀) = y₀,

where f(t, y) is a given real-valued function in the strip S = I × (−∞, +∞), which is continuous with respect to both variables. Should f depend on t only through y, the differential equation is called autonomous.


Most of our analysis will be concerned with one single differential equation (scalar case). The extension to the case of systems of first-order ODEs will be addressed in Section 11.9.

If f is continuous with respect to t, then the solution to (11.1) satisfies

y(t) − y₀ = ∫_{t₀}^{t} f(τ, y(τ)) dτ.   (11.2)

Conversely, if y is defined by (11.2), then it is continuous in I and y(t₀) = y₀. Moreover, since y is a primitive of the continuous function f(·, y(·)), y ∈ C¹(I) and satisfies the differential equation y′(t) = f(t, y(t)).

Thus, if f is continuous, the Cauchy problem (11.1) is equivalent to the integral equation (11.2). We shall see later on how to take advantage of this equivalence in the numerical methods.

Let us now recall two existence and uniqueness results for (11.1).

1. Local existence and uniqueness. Suppose that f(t, y) is locally Lipschitz continuous at (t₀, y₀) with respect to y, that is, there exist two neighborhoods, J ⊆ I of t₀ of width r_J, and Σ of y₀ of width r_Σ, and a constant L > 0, such that

|f(t, y₁) − f(t, y₂)| ≤ L|y₁ − y₂|   ∀t ∈ J, ∀y₁, y₂ ∈ Σ.   (11.3)

Then, the Cauchy problem (11.1) admits a unique solution in a neighborhood of t₀ with radius r₀, where 0 < r₀ < min(r_J, r_Σ/M, 1/L) and M is the maximum of |f(t, y)| on J × Σ. This solution is called the local solution.

Notice that condition (11.3) is automatically satisfied if f has a continuous derivative with respect to y: indeed, in such a case it suffices to choose L as the maximum of |∂f(t, y)/∂y| in J × Σ.

2. Global existence and uniqueness. The problem admits a unique global solution if one can take J = I and Σ = R in (11.3), that is, if f is uniformly Lipschitz continuous with respect to y.

In view of the stability analysis of the Cauchy problem, we consider the following problem:

z′(t) = f(t, z(t)) + δ(t),   t ∈ I,
                                     (11.4)
z(t₀) = y₀ + δ₀,

where δ₀ ∈ R and δ is a continuous function on I. Problem (11.4) is derived from (11.1) by perturbing both the initial datum y₀ and the source function f. Let us now characterize the sensitivity of the solution z to those perturbations.


Definition 11.1 ([Hah67], [Ste71] or [PS91]). Let I be a bounded set. The Cauchy problem (11.1) is stable in the sense of Liapunov (or stable) on I if, for any perturbation (δ₀, δ(t)) satisfying

|δ₀| < ε,   |δ(t)| < ε   ∀t ∈ I,

with ε > 0 sufficiently small to guarantee that the solution to the perturbed problem (11.4) does exist, then

∃C > 0 independent of ε such that |y(t) − z(t)| < Cε, ∀t ∈ I.   (11.5)

If I is not bounded from above, we say that (11.1) is asymptotically stable if, as well as being Liapunov stable in any bounded interval I, the following limit also holds:

|y(t) − z(t)| → 0   for t → +∞.   (11.6)

The requirement that the Cauchy problem be stable is equivalent to requiring that it is well-posed in the sense stated in Chapter 2.

The uniform Lipschitz continuity of f with respect to y suffices to ensure the stability of the Cauchy problem. Indeed, letting w(t) = z(t) − y(t), we have

w′(t) = f(t, z(t)) − f(t, y(t)) + δ(t).

Therefore,

w(t) = δ₀ + ∫_{t₀}^{t} [f(s, z(s)) − f(s, y(s))] ds + ∫_{t₀}^{t} δ(s) ds,   ∀t ∈ I.

Thanks to the previous assumptions, it follows that

|w(t)| ≤ (1 + |t − t₀|) ε + L ∫_{t₀}^{t} |w(s)| ds.

Applying the Gronwall lemma (which we include below for the reader's ease) yields

|w(t)| ≤ (1 + |t − t₀|) ε e^{L|t−t₀|},   ∀t ∈ I

and, thus, (11.5) with C = (1 + K_I) e^{L K_I}, where K_I = max_{t∈I} |t − t₀|.

Lemma 11.1 (Gronwall) Let p be an integrable function nonnegative on the interval (t₀, t₀ + T), and let g and φ be two continuous functions on [t₀, t₀ + T], g being nondecreasing. If φ satisfies the inequality

φ(t) ≤ g(t) + ∫_{t₀}^{t} p(τ)φ(τ) dτ,   ∀t ∈ [t₀, t₀ + T],


then

φ(t) ≤ g(t) exp( ∫_{t₀}^{t} p(τ) dτ ),   ∀t ∈ [t₀, t₀ + T].

For the proof, see, for instance, [QV94], Lemma 1.4.1.

The constant C that appears in (11.5) could be very large and, in general, depends on the upper extreme of the interval I, as in the proof above. For that reason, the property of asymptotic stability is more suitable for describing the behavior of the dynamical system (11.1) as t → +∞ (see [Arn73]).

As is well-known, only a restricted number of nonlinear ODEs can be solved in closed form (see, for instance, [Arn73]). Moreover, even when this is possible, it is not always a straightforward task to find an explicit expression of the solution; for example, consider the (very simple) equation y′ = (y − t)/(y + t), whose solution is only implicitly defined by the relation (1/2) log(t² + y²) + tan⁻¹(y/t) = C, where C is a constant depending on the initial condition.

For this reason we are interested in numerical methods, since these can

be applied to any ODE under the sole condition that it admits a unique

solution.

11.2 One-Step Numerical Methods

Let us address the numerical approximation of the Cauchy problem (11.1).

Fix 0 < T < +∞, let I = (t₀, t₀ + T) be the integration interval and, correspondingly, for h > 0, let t_n = t₀ + nh, with n = 0, 1, 2, . . . , N_h, be the sequence of discretization nodes of I into subintervals I_n = [t_n, t_{n+1}]. The width h of such subintervals is called the discretization stepsize. Notice that N_h is the maximum integer such that t_{N_h} ≤ t₀ + T. Let u_j be the approximation at node t_j of the exact solution y(t_j); this solution will be henceforth shortly denoted by y_j. Similarly, f_j denotes the value f(t_j, u_j). We obviously set u₀ = y₀.

Definition 11.2 A numerical method for the approximation of problem (11.1) is called a one-step method if, ∀n ≥ 0, u_{n+1} depends only on u_n. Otherwise, the scheme is called a multistep method.

For now, we focus our attention on one-step methods. Here are some of

them:


1. forward Euler method

u_{n+1} = u_n + hf_n;   (11.7)

2. backward Euler method

u_{n+1} = u_n + hf_{n+1}.   (11.8)

In both cases, y′ is approximated through a finite difference: forward and backward differences are used in (11.7) and (11.8), respectively. Both finite differences are first-order approximations of the first derivative of y with respect to h (see Section 10.10.1).

3. trapezoidal (or Crank-Nicolson) method

u_{n+1} = u_n + (h/2) [f_n + f_{n+1}].   (11.9)

This method stems from approximating the integral on the right side of

(11.2) by the trapezoidal quadrature rule (9.11).

4. Heun method

u_{n+1} = u_n + (h/2) [f_n + f(t_{n+1}, u_n + hf_n)].   (11.10)

This method can be derived from the trapezoidal method by substituting f(t_{n+1}, u_n + hf(t_n, u_n)) for f(t_{n+1}, u_{n+1}) in (11.9) (i.e., using the forward Euler method to compute u_{n+1}).

In this last case, we notice that the aim is to transform an implicit method

into an explicit one. Addressing this concern, we recall the following.

Definition 11.3 (explicit and implicit methods) A method is called explicit if u_{n+1} can be computed directly in terms of (some of) the previous values u_k, k ≤ n. A method is said to be implicit if u_{n+1} depends implicitly on itself through f.

Methods (11.7) and (11.10) are explicit, while (11.8) and (11.9) are implicit. The latter require, at each time step, the solution of a nonlinear problem if f depends nonlinearly on its second argument.

A remarkable example of one-step methods is given by the Runge-Kutta methods, which will be analyzed in Section 11.8.
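The two explicit schemes above, (11.7) and (11.10), are easy to compare in practice. The sketch below is a Python stand-in for the book's MATLAB programs (function names and the model problem y′ = −y are our own choices):

```python
import numpy as np

def one_step_solve(f, y0, t0, h, nsteps, method):
    # advance u_{n+1} = u_n + h * Phi(t_n, u_n, f_n; h) for nsteps steps
    t, u = t0, y0
    for _ in range(nsteps):
        fn = f(t, u)
        if method == "forward_euler":          # (11.7)
            u = u + h * fn
        elif method == "heun":                 # (11.10)
            u = u + 0.5 * h * (fn + f(t + h, u + h * fn))
        t += h
    return u

f = lambda t, y: -y     # model problem y' = -y, y(0) = 1, exact y(t) = e^{-t}
```

On this problem Heun's method is markedly more accurate than forward Euler, and halving h roughly quarters its error, in agreement with orders 1 and 2.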

11.3 Analysis of One-Step Methods

Any one-step explicit method for the approximation of (11.1) can be cast in the concise form

u_{n+1} = u_n + hΦ(t_n, u_n, f_n; h),   0 ≤ n ≤ N_h − 1,   u₀ = y₀,   (11.11)


where Φ(·, ·, ·; ·) is called an increment function. Letting as usual y_n = y(t_n), analogously to (11.11) we can write

y_{n+1} = y_n + hΦ(t_n, y_n, f(t_n, y_n); h) + ε_{n+1},   0 ≤ n ≤ N_h − 1,   (11.12)

where ε_{n+1} is the residual arising at the point t_{n+1} when we pretend that the exact solution "satisfies" the numerical scheme. Let us write the residual as

ε_{n+1} = hτ_{n+1}(h).

The quantity τ_{n+1}(h) is called the local truncation error (LTE) at the node t_{n+1}. We thus define the global truncation error to be the quantity

τ(h) = max_{0≤n≤N_h−1} |τ_{n+1}(h)|.

Notice that τ(h) depends on the solution y of the Cauchy problem (11.1).

The forward Euler method is a special instance of (11.11), where

Φ(t_n, u_n, f_n; h) = f_n,

while to recover the Heun method we must set

Φ(t_n, u_n, f_n; h) = (1/2) [f_n + f(t_n + h, u_n + hf_n)].

A one-step explicit scheme is fully characterized by its increment function Φ. This function, in all the cases considered thus far, is such that

lim_{h→0} Φ(t_n, y_n, f(t_n, y_n); h) = f(t_n, y_n),   ∀t_n ≥ t₀.   (11.13)

Property (11.13), together with the obvious relation y_{n+1} − y_n = hy′(t_n) + O(h²), ∀n ≥ 0, allows one to obtain from (11.12) that lim_{h→0} τ_n(h) = 0 for 0 ≤ n ≤ N_h − 1. In turn, this condition ensures that

lim_{h→0} τ(h) = 0,

which expresses the consistency of the numerical method (11.11) with the Cauchy problem (11.1). In general, a method is said to be consistent if its LTE is infinitesimal with respect to h. Moreover, a scheme has order p if, ∀t ∈ I, the solution y(t) of the Cauchy problem (11.1) fulfills the condition

τ(h) = O(hᵖ)   for h → 0.   (11.14)

Using Taylor expansions, as was done in Section 11.2, it can be proved that

the forward Euler method has order 1, while the Heun method has order 2

(see Exercises 1 and 2).


11.3.1 The Zero-Stability

Let us formulate a requirement analogous to the one for Liapunov stability (11.5), specifically for the numerical scheme. If (11.5) is satisfied with a constant C independent of h, we shall say that the numerical problem is zero-stable. Precisely:

Definition 11.4 (zero-stability of one-step methods) The numerical method (11.11) for the approximation of problem (11.1) is zero-stable if

∃h₀ > 0, ∃C > 0 : ∀h ∈ (0, h₀],   |z_n^{(h)} − u_n^{(h)}| ≤ Cε,   0 ≤ n ≤ N_h,   (11.15)

where z_n^{(h)}, u_n^{(h)} are respectively the solutions of the problems

z_{n+1}^{(h)} = z_n^{(h)} + h [Φ(t_n, z_n^{(h)}, f(t_n, z_n^{(h)}); h) + δ_{n+1}],
                                                                          (11.16)
z_0^{(h)} = y₀ + δ₀,

u_{n+1}^{(h)} = u_n^{(h)} + h Φ(t_n, u_n^{(h)}, f(t_n, u_n^{(h)}); h),
                                                                          (11.17)
u_0^{(h)} = y₀,

for 0 ≤ n ≤ N_h − 1, under the assumption that |δ_k| ≤ ε, 0 ≤ k ≤ N_h.

Zero-stability thus requires that, in a bounded interval, (11.15) holds for any value h ≤ h₀. This property deals, in particular, with the behavior of the numerical method in the limit case h → 0, and this justifies the name of zero-stability. Zero-stability is therefore a distinguishing property of the numerical method itself, not of the Cauchy problem (which, indeed, is stable due to the uniform Lipschitz continuity of f). Property (11.15) ensures that the numerical method has a weak sensitivity with respect to small changes in the data and is thus stable in the sense of the general definition given in Chapter 2.

Remark 11.1 The constant C in (11.15) is independent of h (and thus of N_h), but it can depend on the width T of the integration interval I. Actually, (11.15) does not exclude a priori the constant C from being an unbounded function of T.

The request that a numerical method be stable arises, before anything else, from the need of keeping under control the (unavoidable) errors introduced by the finite arithmetic of the computer. Indeed, if the numerical method were not zero-stable, the rounding errors made on y₀, as well as those made in the process of computing f(t_n, u_n), would make the computed solution completely useless.


Theorem 11.1 (Zero-stability) Consider the explicit one-step method (11.11) for the numerical solution of the Cauchy problem (11.1). Assume that the increment function Φ is Lipschitz continuous with respect to the second argument, with constant Λ independent of h and of the nodes t_j ∈ [t₀, t₀ + T], that is,

∃h₀ > 0, ∃Λ > 0 : ∀h ∈ (0, h₀],
|Φ(t_n, u_n^{(h)}, f(t_n, u_n^{(h)}); h) − Φ(t_n, z_n^{(h)}, f(t_n, z_n^{(h)}); h)|
      ≤ Λ |u_n^{(h)} − z_n^{(h)}|,   0 ≤ n ≤ N_h.   (11.18)

Then, method (11.11) is zero-stable.

Proof. Setting w_j^{(h)} = z_j^{(h)} − u_j^{(h)}, by subtracting (11.17) from (11.16) we obtain, for j = 0, . . . , N_h − 1,

w_{j+1}^{(h)} = w_j^{(h)} + h [Φ(t_j, z_j^{(h)}, f(t_j, z_j^{(h)}); h) − Φ(t_j, u_j^{(h)}, f(t_j, u_j^{(h)}); h)] + hδ_{j+1}.

Summing over j gives, for n = 1, . . . , N_h,

w_n^{(h)} = w_0^{(h)} + h Σ_{j=0}^{n−1} δ_{j+1} + h Σ_{j=0}^{n−1} [Φ(t_j, z_j^{(h)}, f(t_j, z_j^{(h)}); h) − Φ(t_j, u_j^{(h)}, f(t_j, u_j^{(h)}); h)],

so that, by (11.18),

|w_n^{(h)}| ≤ |w_0^{(h)}| + h Σ_{j=0}^{n−1} |δ_{j+1}| + hΛ Σ_{j=0}^{n−1} |w_j^{(h)}|,   1 ≤ n ≤ N_h.   (11.19)

Applying the discrete Gronwall lemma, given below, we obtain

|w_n^{(h)}| ≤ (1 + hn) ε e^{nhΛ},   1 ≤ n ≤ N_h.

Then (11.15) follows from noticing that hn ≤ T and setting C = (1 + T) e^{ΛT}. □

Notice that zero-stability implies the boundedness of the solution when f

is linear with respect to the second argument.
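A direct numerical check of Definition 11.4 for the forward Euler method (for which Φ = f) is easy to set up; the test problem f(t, y) = −y (so L = Λ = 1) and the perturbation size are our own choices:

```python
import numpy as np

# Compare the unperturbed forward Euler solution u with a solution z
# perturbed as in (11.16), taking delta_0 = delta_{n+1} = eps at every step.
T, h, eps = 1.0, 0.01, 1e-6
u, z = 1.0, 1.0 + eps          # z starts from the perturbed initial datum
gap = abs(z - u)
for _ in range(int(round(T / h))):
    u = u + h * (-u)           # unperturbed scheme (11.17)
    z = z + h * (-z + eps)     # perturbed scheme (11.16)
    gap = max(gap, abs(z - u))

# constant from the proof of Theorem 11.1: C = (1 + T) e^{Lambda T}
bound = (1.0 + T) * np.exp(T) * eps
```

As (11.15) predicts, the gap between the two solutions stays below Cε for all n.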

Lemma 11.2 (discrete Gronwall) Let k_n be a nonnegative sequence and φ_n a sequence such that

φ₀ ≤ g₀,
φ_n ≤ g₀ + Σ_{s=0}^{n−1} p_s + Σ_{s=0}^{n−1} k_s φ_s,   n ≥ 1.


If g₀ ≥ 0 and p_n ≥ 0 for any n ≥ 0, then

φ_n ≤ (g₀ + Σ_{s=0}^{n−1} p_s) exp(Σ_{s=0}^{n−1} k_s),   n ≥ 1.

For the proof, see, for instance, [QV94], Lemma 1.4.2. In the specific case of the Euler method, checking the property of zero-stability can be done directly using the Lipschitz continuity of f (we refer the reader to the end of Section 11.3.2). In the case of multistep methods, the analysis will lead to the verification of a purely algebraic property, the so-called root condition (see Section 11.6.3).

11.3.2 Convergence Analysis

Definition 11.5 A method is said to be convergent if

|u_n − y_n| ≤ C(h),   ∀n = 0, . . . , N_h,

where C(h) is infinitesimal with respect to h. In that case, the method is said to be convergent with order p if ∃C > 0 such that C(h) = Chᵖ.

We can prove the following theorem.

Theorem 11.2 (Convergence) Under the same assumptions as in Theorem 11.1, we have

|y_n − u_n| ≤ (|y₀ − u₀| + nhτ(h)) e^{nhΛ},   1 ≤ n ≤ N_h.   (11.20)

Therefore, if the consistency assumption (11.13) holds and |y₀ − u₀| → 0 as h → 0, then the method is convergent. Moreover, if |y₀ − u₀| = O(hᵖ) and the method has order p, then it is also convergent with order p.

Proof. Setting w_j = y_j − u_j, subtracting (11.11) from (11.12) and proceeding as in the proof of the previous theorem yields inequality (11.19), with the understanding that w₀ = y₀ − u₀ and δ_{j+1} = τ_{j+1}(h). The estimate (11.20) is then obtained by applying again the discrete Gronwall lemma. From the fact that nh ≤ T and τ(h) = O(hᵖ), we can conclude that |y_n − u_n| ≤ Chᵖ, with C depending on T and Λ but not on h. □

A consistent and zero-stable method is thus convergent. This property is known as the Lax-Richtmyer theorem or equivalence theorem (the converse, "a convergent method is zero-stable", being obviously true). This theorem, which is proven in [IK66], was already advocated in Section 2.2.1 and is a central result in the analysis of numerical methods for ODEs (see [Dah56] or [Hen62] for linear multistep methods, [But66], [MNS74] for wider classes of methods). It will be considered again in Section 11.5 for the analysis of multistep methods.

We carry out in detail the convergence analysis in the case of the forward Euler method, without resorting to the discrete Gronwall lemma. In the first part of the proof we assume that any operation is performed in exact arithmetic and that u₀ = y₀.

Denote by e_{n+1} = y_{n+1} − u_{n+1} the error at node t_{n+1}, with n = 0, 1, . . ., and notice that

e_{n+1} = (y_{n+1} − u*_{n+1}) + (u*_{n+1} − u_{n+1}),   (11.21)

where u*_{n+1} = y_n + hf(t_n, y_n) is the solution obtained after one step of the forward Euler method starting from the initial datum y_n (see Figure 11.1). The first addendum in (11.21) accounts for the consistency error, the second one for the cumulation of these errors. Then

y_{n+1} − u*_{n+1} = hτ_{n+1}(h),   u*_{n+1} − u_{n+1} = e_n + h [f(t_n, y_n) − f(t_n, u_n)].

FIGURE 11.1. Geometrical interpretation of the local and global truncation errors at node t_{n+1} for the forward Euler method

As a consequence,

|e_{n+1}| ≤ h|τ_{n+1}(h)| + |e_n| + h|f(t_n, y_n) − f(t_n, u_n)| ≤ hτ(h) + (1 + hL)|e_n|,

L being the Lipschitz constant of f. By recursion on n, we find

|e_{n+1}| ≤ [1 + (1 + hL) + . . . + (1 + hL)ⁿ] hτ(h)
         = ((1 + hL)^{n+1} − 1)/L · τ(h) ≤ (e^{L(t_{n+1}−t₀)} − 1)/L · τ(h).


The last inequality follows from noticing that 1 + hL ≤ e^{hL} and (n + 1)h = t_{n+1} − t₀.

On the other hand, if y ∈ C²(I), the LTE for the forward Euler method is (see Section 10.10.1)

τ_{n+1}(h) = (h/2) y″(ξ),   ξ ∈ (t_n, t_{n+1}),

and thus τ(h) ≤ (M/2)h, where M = max_{ξ∈I} |y″(ξ)|. In conclusion,

|e_{n+1}| ≤ (e^{L(t_{n+1}−t₀)} − 1)/L · (M/2) h,   ∀n ≥ 0,   (11.22)

from which it follows that the global error tends to zero with the same order as the local truncation error.

If the rounding errors are also accounted for, we can assume that the solution ū_{n+1}, actually computed by the forward Euler method at time t_{n+1}, is such that

ū₀ = y₀ + ζ₀,   ū_{n+1} = ū_n + hf(t_n, ū_n) + ζ_{n+1},   (11.23)

having denoted the rounding error by ζ_j, for j ≥ 0.

Problem (11.23) is an instance of (11.16), provided that we identify ζ_{n+1} and ū_n with hδ_{n+1} and z_n^{(h)} in (11.16), respectively. Combining Theorems 11.1 and 11.2 we get, instead of (11.22), the following error estimate

|y_{n+1} − ū_{n+1}| ≤ e^{L(t_{n+1}−t₀)} [ |ζ₀| + (1/L) ( (M/2) h + ζ/h ) ],

where ζ = max_{1≤j≤n+1} |ζ_j|. The presence of rounding errors does not allow us, therefore, to conclude that as h → 0 the error goes to zero. Actually, there exists an optimal (non null) value of h, h_opt, for which the error is minimized. For h < h_opt, the rounding error dominates the truncation error and the global error increases.

11.3.3 The Absolute Stability

The property of absolute stability is in some way specular to zero-stability, as far as the roles played by h and I are concerned. Heuristically, we say that a numerical method is absolutely stable if, for h fixed, u_n remains bounded as t_n → +∞. This property, thus, deals with the asymptotic behavior of u_n, as opposed to a zero-stable method for which, for a fixed integration interval, u_n remains bounded as h → 0.

For a precise definition, consider the linear Cauchy problem (which from now on we shall refer to as the test problem)

y′(t) = λy(t),   t > 0,
                          (11.24)
y(0) = 1,

with λ ∈ C, whose solution is y(t) = e^{λt}. Notice that lim_{t→+∞} |y(t)| = 0 if Re(λ) < 0.

Definition 11.6 A numerical method for approximating (11.24) is absolutely stable if

|u_n| → 0   as   t_n → +∞.   (11.25)

Let h be the discretization stepsize. The numerical solution u_n of (11.24) obviously depends on h and λ. The region of absolute stability of the numerical method is the subset of the complex plane

A = {z = hλ ∈ C : (11.25) is satisfied}.   (11.26)

Thus, A is the set of the values of the product hλ for which the numerical method furnishes solutions that decay to zero as t_n tends to infinity.

Let us check whether the one-step methods introduced previously are ab-

solutely stable.

1. Forward Euler method: applying (11.7) to problem (11.24) yields u_{n+1} = u_n + hλu_n for n ≥ 0, with u_0 = 1. Proceeding recursively on n we get

u_n = (1 + hλ)^n,   n ≥ 0.

Therefore, condition (11.25) is satisfied iff |1 + hλ| < 1, that is, if hλ lies within the unit circle with center at (−1, 0) (see Figure 11.3). This amounts to requiring that

hλ ∈ C⁻   and   0 < h < −2Re(λ)/|λ|²,      (11.27)

where

C⁻ = {z ∈ C : Re(z) < 0}.

Example 11.1 For the Cauchy problem y′(x) = −5y(x) for x > 0 and y(0) = 1, condition (11.27) implies 0 < h < 2/5. Figure 11.2 (left) shows the behavior of the computed solution for two values of h which do not fulfill this condition, while on the right we show the solutions for two values of h that do. Notice that in this second case the oscillations, if present, damp out as t grows. •
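The thresholds in Example 11.1 can be reproduced directly; a small sketch applying the forward Euler method to y′ = −5y, y(0) = 1:

```python
def forward_euler_test(lam, h, n_steps):
    """Forward Euler applied to u' = lam*u, u(0) = 1; the iterates
    satisfy u_n = (1 + h*lam)**n."""
    u = 1.0
    for _ in range(n_steps):
        u = u + h * lam * u
    return u

lam = -5.0
# h = 0.39 < 2/5: |1 + h*lam| = 0.95 < 1, the iterates decay to zero.
# h = 0.41 > 2/5: |1 + h*lam| = 1.05 > 1, the iterates blow up.
u_stable   = forward_euler_test(lam, 0.39, 200)
u_unstable = forward_euler_test(lam, 0.41, 200)
```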

2. Backward Euler method: proceeding as before, we get this time

u_n = 1/(1 − hλ)^n,   n ≥ 0.

The absolute stability property (11.25) is satisfied for any value of hλ that does not belong to the unit circle of center (1, 0) (see Figure 11.3, right).


FIGURE 11.2. Left: computed solutions for h = 0.41 > 2/5 (dashed line) and h = 2/5 (solid line). Notice how, in the limiting case h = 2/5, the oscillations remain unmodified as t grows. Right: two solutions are reported for h = 0.39 (solid line) and h = 0.15 (dashed line)

Example 11.2 The numerical solution given by the backward Euler method in the case of Example 11.1 does not exhibit any oscillation for any value of h. On the other hand, the same method, if applied to the problem y′(t) = 5y(t) for t > 0 with y(0) = 1, computes a solution that decays to zero anyway as t → ∞ if h > 2/5, despite the fact that the exact solution of the Cauchy problem tends to infinity. •
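The spurious decay described in Example 11.2 is easy to observe; a sketch of the backward Euler method applied to y′ = 5y, y(0) = 1:

```python
def backward_euler_test(lam, h, n_steps):
    """Backward Euler on u' = lam*u, u(0) = 1: each step solves
    u_{n+1} = u_n + h*lam*u_{n+1}, i.e. u_{n+1} = u_n / (1 - h*lam)."""
    u = 1.0
    for _ in range(n_steps):
        u = u / (1.0 - h * lam)
    return u

lam = 5.0
# h = 0.5 > 2/5: |1 - h*lam| = 1.5 > 1, so u_n -> 0 although y(t) -> infinity.
u_spurious = backward_euler_test(lam, 0.5, 100)
# h = 0.1 < 2/5: |1 - h*lam| = 0.5 < 1, so u_n grows, as the exact solution does.
u_growing = backward_euler_test(lam, 0.1, 100)
```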

3. Trapezoidal (or Crank-Nicolson) method: we get

u_n = [(1 + λh/2)/(1 − λh/2)]^n,   n ≥ 0,

hence (11.25) is fulfilled for any hλ ∈ C⁻.

4. Heun's method: applying (11.10) to problem (11.24) and proceeding by recursion on n, we obtain

u_n = (1 + hλ + (hλ)²/2)^n,   n ≥ 0.

As shown in Figure 11.3, the region of absolute stability of Heun's method is larger than the corresponding one of Euler's method. However, its restriction to the real axis is the same.
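On the test problem each of the four schemes reduces to u_{n+1} = R(hλ)u_n for a method-dependent stability function, and (11.25) amounts to |R(hλ)| < 1. A sketch comparing the four functions (the name R is our notation, not the text's):

```python
# Stability functions on the test problem: u_{n+1} = R(h*lambda) * u_n.
def R_fe(z):   return 1 + z                     # forward Euler
def R_be(z):   return 1 / (1 - z)               # backward Euler
def R_cn(z):   return (1 + z/2) / (1 - z/2)     # trapezoidal (Crank-Nicolson)
def R_heun(z): return 1 + z + z**2 / 2          # Heun

def is_abs_stable(R, z):
    """Absolute stability at z = h*lambda means |R(z)| < 1."""
    return abs(R(z)) < 1
```

On the negative real axis both explicit methods are stable exactly for −2 < z < 0, while the two implicit methods are stable for every z < 0, consistent with their A-stability discussed below.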

We say that a method is A-stable if A ∩ C⁻ = C⁻, i.e., if for Re(λ) < 0, condition (11.25) is satisfied for all values of h.

The backward Euler and Crank-Nicolson methods are A-stable, while the forward Euler and Heun methods are conditionally stable.

Remark 11.2 Notice that the implicit one-step methods examined so far are unconditionally absolutely stable, while explicit schemes are conditionally absolutely stable. This is, however, not a general rule: in fact, there exist implicit unstable or only conditionally stable schemes. On the contrary, there are no explicit unconditionally absolutely stable schemes [Wid67].

FIGURE 11.3. Regions of absolute stability for the forward (FE) and backward Euler (BE) methods and for Heun's method (H). Notice that the region of absolute stability of the BE method lies outside the unit circle of center (1, 0) (shaded area)

11.4 Difference Equations

For any integer k ≥ 1, an equation of the form

u_{n+k} + α_{k−1} u_{n+k−1} + . . . + α_0 u_n = ϕ_{n+k},   n = 0, 1, . . .      (11.28)

is called a linear difference equation of order k. The coefficients α_0 ≠ 0, α_1, . . . , α_{k−1} may or may not depend on n. If, for any n, the right side ϕ_{n+k} is equal to zero, the equation is said to be homogeneous, while if the α_j's are independent of n it is called a linear difference equation with constant coefficients.

Difference equations arise for instance in the discretization of ordinary differential equations. Regarding this, we notice that all the numerical methods examined so far end up with equations like (11.28). More generally, equations like (11.28) are encountered when quantities are defined through linear recursive relations. Another relevant application is concerned with the discretization of boundary value problems (see Chapter 12). For further details on the subject, we refer to Chapters 2 and 5 of [BO78] and to Chapter 6 of [Gau97].


Any sequence {u_n, n = 0, 1, . . . } of values that satisfy (11.28) is called a solution to the equation (11.28). Given k initial values u_0, . . . , u_{k−1}, it is always possible to construct a solution of (11.28) by computing (sequentially)

u_{n+k} = ϕ_{n+k} − (α_{k−1} u_{n+k−1} + . . . + α_0 u_n),   n = 0, 1, . . .

However, our interest is to find an expression of the solution u_{n+k} which depends only on the coefficients and on the initial values.

We start by considering the homogeneous case with constant coefficients,

u_{n+k} + α_{k−1} u_{n+k−1} + . . . + α_0 u_n = 0,   n = 0, 1, . . .      (11.29)

and associate with (11.29) the characteristic polynomial Π ∈ P_k defined as

Π(r) = r^k + α_{k−1} r^{k−1} + . . . + α_1 r + α_0.      (11.30)

Denoting its roots by r_j, j = 0, . . . , k − 1, any sequence of the form

{r_j^n, n = 0, 1, . . . },   for j = 0, . . . , k − 1,      (11.31)

is a solution of (11.29), since

r_j^{n+k} + α_{k−1} r_j^{n+k−1} + . . . + α_0 r_j^n
= r_j^n ( r_j^k + α_{k−1} r_j^{k−1} + . . . + α_0 ) = r_j^n Π(r_j) = 0.

We say that the k sequences defined in (11.31) are the fundamental solutions of the homogeneous equation (11.29). Any sequence of the form

u_n = γ_0 r_0^n + γ_1 r_1^n + . . . + γ_{k−1} r_{k−1}^n,   n = 0, 1, . . .      (11.32)

is still a solution to (11.29), since the equation is linear.

The coefficients γ_0, . . . , γ_{k−1} can be determined by imposing the k initial conditions u_0, . . . , u_{k−1}. Moreover, it can be proved that if all the roots of Π are simple, then all the solutions of (11.29) can be cast in the form (11.32).

This last statement no longer holds if there are roots of Π with multiplicity greater than 1. If, for a certain j, the root r_j has multiplicity m ≥ 2, in order to obtain a system of fundamental solutions that generate all the solutions of (11.29), it suffices to replace the corresponding fundamental solution {r_j^n, n = 0, 1, . . . } with the m sequences

{r_j^n, n = 0, 1, . . . },  {n r_j^n, n = 0, 1, . . . },  . . . ,  {n^{m−1} r_j^n, n = 0, 1, . . . }.

More generally, assuming that r_0, . . . , r_{k′} are the distinct roots of Π, with multiplicities equal to m_0, . . . , m_{k′} respectively, we can write the solution of (11.29) as

u_n = Σ_{j=0}^{k′} ( Σ_{s=0}^{m_j−1} γ_{sj} n^s ) r_j^n,   n = 0, 1, . . . .      (11.33)


Notice that even in the presence of complex conjugate roots one can still obtain a real solution (see Exercise 3).

Example 11.3 For the difference equation u_{n+2} − u_n = 0, we have Π(r) = r² − 1, then r_0 = −1 and r_1 = 1, therefore the solution is given by u_n = γ_00 (−1)^n + γ_01. In particular, enforcing the initial conditions u_0 and u_1 gives γ_00 = (u_0 − u_1)/2, γ_01 = (u_0 + u_1)/2. •

Example 11.4 For the difference equation u_{n+3} − 2u_{n+2} − 7u_{n+1} − 4u_n = 0, Π(r) = r³ − 2r² − 7r − 4. Its roots are r_0 = −1 (with multiplicity 2), r_1 = 4, and the solution is u_n = (γ_00 + nγ_10)(−1)^n + γ_01 4^n. Enforcing the initial conditions we can compute the unknown coefficients as the solution of the following linear system

γ_00 + γ_01 = u_0,
−γ_00 − γ_10 + 4γ_01 = u_1,
γ_00 + 2γ_10 + 16γ_01 = u_2,

that yields γ_00 = (24u_0 − 2u_1 − u_2)/25, γ_10 = (u_2 − 3u_1 − 4u_0)/5 and γ_01 = (2u_1 + u_0 + u_2)/25. •
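The closed form obtained in Example 11.4 can be checked against the sequential computation described above; a sketch (the initial values below are arbitrary test data):

```python
def by_recursion(u0, u1, u2, n_max):
    """Advance u_{n+3} = 2u_{n+2} + 7u_{n+1} + 4u_n from the three
    initial values, as in the sequential computation of (11.28)."""
    u = [u0, u1, u2]
    while len(u) <= n_max:
        u.append(2*u[-1] + 7*u[-2] + 4*u[-3])
    return u

def by_closed_form(u0, u1, u2, n):
    """u_n = (g00 + n*g10)*(-1)**n + g01*4**n with the coefficients
    of Example 11.4."""
    g00 = (24*u0 - 2*u1 - u2) / 25
    g10 = (u2 - 3*u1 - 4*u0) / 5
    g01 = (u0 + 2*u1 + u2) / 25
    return (g00 + n*g10) * (-1)**n + g01 * 4**n

seq = by_recursion(1.0, 0.0, 2.0, 12)
```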

The expression (11.33) is of little practical use since it does not outline the dependence of u_n on the k initial conditions. A more convenient representation is obtained by introducing a new set {ψ_j^{(n)}, n = 0, 1, . . . } of fundamental solutions that satisfy

ψ_j^{(i)} = δ_ij,   i, j = 0, 1, . . . , k − 1.      (11.34)

Then, the solution of (11.29) subject to the initial conditions u_0, . . . , u_{k−1} is given by

u_n = Σ_{j=0}^{k−1} u_j ψ_j^{(n)},   n = 0, 1, . . . .      (11.35)

The new fundamental solutions {ψ_j^{(n)}, n = 0, 1, . . . } can be represented in terms of those in (11.31) as follows

ψ_j^{(n)} = Σ_{m=0}^{k−1} β_{j,m} r_m^n,   for j = 0, . . . , k − 1,  n = 0, 1, . . .      (11.36)

By requiring (11.34), we obtain the k linear systems

Σ_{m=0}^{k−1} β_{j,m} r_m^i = δ_ij,   i, j = 0, . . . , k − 1,

whose matrix form is

R b_j = e_j,   j = 0, . . . , k − 1.      (11.37)

Here e_j denotes the j-th unit vector of R^k, R = (r_im) = (r_m^i) and b_j = (β_{j,0}, . . . , β_{j,k−1})^T. If all the r_j's are simple roots of Π, the matrix R is nonsingular (see Exercise 5).

The general case where Π has k′ + 1 distinct roots r_0, . . . , r_{k′} with multiplicities m_0, . . . , m_{k′} respectively, can be dealt with by replacing {r_j^n, n = 0, 1, . . . } in (11.36) with {n^s r_j^n, n = 0, 1, . . . }, where j = 0, . . . , k′ and s = 0, . . . , m_j − 1.

Example 11.5 We consider again the difference equation of Example 11.4. Here we have {r_0^n, n r_0^n, r_1^n, n = 0, 1, . . . }, so that the matrix R becomes

    [ r_0^0   0·r_0^0   r_1^0 ]   [  1    0    1 ]
R = [ r_0^1   1·r_0^1   r_1^1 ] = [ −1   −1    4 ] .
    [ r_0^2   2·r_0^2   r_1^2 ]   [  1    2   16 ]

Solving the three systems (11.37) yields

ψ_0^{(n)} = (24/25)(−1)^n − (4/5) n(−1)^n + (1/25) 4^n,
ψ_1^{(n)} = −(2/25)(−1)^n − (3/5) n(−1)^n + (2/25) 4^n,
ψ_2^{(n)} = −(1/25)(−1)^n + (1/5) n(−1)^n + (1/25) 4^n,

from which it can be checked that the solution u_n = Σ_{j=0}^{2} u_j ψ_j^{(n)} coincides with the one already found in Example 11.4. •
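The three systems (11.37) of Example 11.5 can be solved directly; a sketch using NumPy:

```python
import numpy as np

# R has entries r_im: the m-th fundamental solution (-1)**n, n*(-1)**n, 4**n
# evaluated at n = i, for i = 0, 1, 2 (Example 11.5).
R = np.array([[ 1.0,  0.0,  1.0],
              [-1.0, -1.0,  4.0],
              [ 1.0,  2.0, 16.0]])

# Solving R b_j = e_j for j = 0, 1, 2 gives the coefficients beta_{j,m}
# of the representation (11.36); column j of B is b_j.
B = np.linalg.solve(R, np.eye(3))

def psi(j, n):
    """psi_j^{(n)} expanded on the fundamental solutions."""
    b = B[:, j]
    return b[0] * (-1)**n + b[1] * n * (-1)**n + b[2] * 4**n
```

In particular b_0 = (24/25, −4/5, 1/25)^T reproduces the expression of ψ_0^{(n)} above.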

Now we return to the case of nonconstant coefficients and consider the following homogeneous equation

u_{n+k} + Σ_{j=1}^{k} α_{k−j}(n) u_{n+k−j} = 0,   n = 0, 1, . . . .      (11.38)

The goal is to transform it into an ODE by means of a function F, called the generating function of the equation (11.38). F depends on the real variable t and is derived as follows. We require that the n-th coefficient of the Taylor series of F around t = 0 can be written as γ_n u_n, for some unknown constant γ_n, so that

F(t) = Σ_{n=0}^{∞} γ_n u_n t^n.      (11.39)

The coefficients {γ_n} are unknown and must be determined in such a way that

Σ_{j=0}^{k} c_j F^{(k−j)}(t) = Σ_{n=0}^{∞} [ u_{n+k} + Σ_{j=1}^{k} α_{k−j}(n) u_{n+k−j} ] t^n,      (11.40)


where the c_j are suitable unknown constants not depending on n. Note that, since equation (11.38) is homogeneous, the right side of (11.40) vanishes and we obtain the ODE

Σ_{j=0}^{k} c_j F^{(k−j)}(t) = 0,

to which we must add the initial conditions F^{(j)}(0) = γ_j u_j for j = 0, . . . , k − 1. Once F is available, it is simple to recover u_n through the definition of F itself.

Example 11.6 Consider the difference equation

(n + 2)(n + 1) u_{n+2} − 2(n + 1) u_{n+1} − 3u_n = 0,   n = 0, 1, . . .      (11.41)

with the initial conditions u_0 = u_1 = 2. We look for a generating function of the form (11.39). By term-by-term differentiation of the series, we get

F′(t) = Σ_{n=0}^{∞} γ_n n u_n t^{n−1},   F″(t) = Σ_{n=0}^{∞} γ_n n(n − 1) u_n t^{n−2},

and, after some algebra, we find

F′(t) = Σ_{n=0}^{∞} γ_{n+1} (n + 1) u_{n+1} t^n,
F″(t) = Σ_{n=0}^{∞} γ_{n+2} (n + 2)(n + 1) u_{n+2} t^n.

As a consequence, (11.40) becomes

Σ_{n=0}^{∞} (n + 1)(n + 2) u_{n+2} t^n − 2 Σ_{n=0}^{∞} (n + 1) u_{n+1} t^n − 3 Σ_{n=0}^{∞} u_n t^n
= c_0 Σ_{n=0}^{∞} γ_{n+2} (n + 2)(n + 1) u_{n+2} t^n + c_1 Σ_{n=0}^{∞} γ_{n+1} (n + 1) u_{n+1} t^n + c_2 Σ_{n=0}^{∞} γ_n u_n t^n,

so that, equating both sides, we find

γ_n = 1  ∀n ≥ 0,   c_0 = 1,  c_1 = −2,  c_2 = −3.

We have thus associated with the difference equation the following ODE with constant coefficients

F″(t) − 2F′(t) − 3F(t) = 0,

with the initial conditions F(0) = F′(0) = 2. The n-th Taylor coefficient of the solution F(t) = e^{3t} + e^{−t} is

(1/n!) F^{(n)}(0) = (1/n!) [(−1)^n + 3^n],

so that u_n = (1/n!) [(−1)^n + 3^n] is the solution of (11.41). •
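The solution found in Example 11.6 can be verified in exact rational arithmetic; a brief sketch:

```python
from fractions import Fraction
from math import factorial

def u(n):
    """u_n = ((-1)**n + 3**n) / n!, the solution found in Example 11.6."""
    return Fraction((-1)**n + 3**n, factorial(n))

def residual(n):
    """Left side of (11.41), which must vanish for every n >= 0."""
    return (n + 2)*(n + 1)*u(n + 2) - 2*(n + 1)*u(n + 1) - 3*u(n)
```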


The nonhomogeneous case (11.28) can be tackled by searching for solutions of the form

u_n = u_n^{(0)} + u_n^{(ϕ)},

where u_n^{(0)} is the solution of the associated homogeneous equation and u_n^{(ϕ)} is a particular solution of the nonhomogeneous equation. Once the solution

of the homogeneous equation is available, a general technique to obtain the solution of the nonhomogeneous equation is based on the method of variation of parameters, combined with a reduction of the order of the difference equation (see [BO78]).

In the special case of difference equations with constant coefficients, with ϕ_n of the form c^n Q(n), where c is a constant and Q is a polynomial of degree p with respect to the variable n, a possible approach is that of undetermined coefficients, where one looks for a particular solution that depends on some undetermined constants and has a known form for some classes of right sides ϕ_n. It suffices to look for a particular solution of the form

u_n^{(ϕ)} = c^n (b_p n^p + b_{p−1} n^{p−1} + . . . + b_0),

where b_p, . . . , b_0 are constants to be determined in such a way that u_n^{(ϕ)} is actually a solution of (11.28).

Example 11.7 Consider the difference equation u_{n+3} − u_{n+2} + u_{n+1} − u_n = 2^n n². The particular solution is of the form u_n = 2^n (b_2 n² + b_1 n + b_0). Substituting this solution into the equation, we find 5b_2 n² + (36b_2 + 5b_1) n + (58b_2 + 18b_1 + 5b_0) = n², from which, recalling the principle of identity for polynomials, one gets b_2 = 1/5, b_1 = −36/25 and b_0 = 358/125. •
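The coefficients of Example 11.7 are quickly verified in exact arithmetic; a sketch:

```python
from fractions import Fraction

# Coefficients found by the method of undetermined coefficients.
b2, b1, b0 = Fraction(1, 5), Fraction(-36, 25), Fraction(358, 125)

def u_part(n):
    """Candidate particular solution u_n = 2**n * (b2*n**2 + b1*n + b0)."""
    return 2**n * (b2*n**2 + b1*n + b0)

def lhs(n):
    """Left side u_{n+3} - u_{n+2} + u_{n+1} - u_n of the equation,
    which must equal 2**n * n**2."""
    return u_part(n + 3) - u_part(n + 2) + u_part(n + 1) - u_part(n)
```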

Analogous to the homogeneous case, it is possible to express the solution of (11.28) as

u_n = Σ_{j=0}^{k−1} u_j ψ_j^{(n)} + Σ_{l=k}^{n} ϕ_l ψ_{k−1}^{(n−l+k−1)},   n = 0, 1, . . .      (11.42)

where we define ψ_{k−1}^{(i)} ≡ 0 for all i < 0 and ϕ_j ≡ 0 for all j < k.
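Formula (11.42) can be sanity-checked in the simplest case k = 1, where ψ_0^{(n)} = (−α_0)^n; a sketch (the data α_0 = 1/2 and ϕ_n = n² are arbitrary choices for the test):

```python
def by_forward_recursion(alpha0, phi, u0, n_max):
    """Sequential solution of u_{n+1} + alpha0*u_n = phi(n+1), the
    k = 1 instance of (11.28)."""
    u = [u0]
    for n in range(n_max):
        u.append(-alpha0 * u[-1] + phi(n + 1))
    return u

def by_superposition(alpha0, phi, u0, n):
    """Formula (11.42) for k = 1, with psi_0^{(n)} = (-alpha0)**n."""
    psi = lambda i: (-alpha0)**i if i >= 0 else 0.0
    return u0 * psi(n) + sum(phi(l) * psi(n - l) for l in range(1, n + 1))

alpha0, phi = 0.5, lambda j: float(j**2)
seq = by_forward_recursion(alpha0, phi, 2.0, 10)
```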

11.5 Multistep Methods

Let us now introduce some examples of multistep methods (shortly, MS).

Definition 11.7 (q-step methods) A q-step method (q ≥ 1) is one for which, ∀n ≥ q − 1, u_{n+1} depends on u_{n+1−q}, but not on the values u_k with k < n + 1 − q.


A well-known two-step explicit method can be obtained by using the centered finite difference (10.61) to approximate the first order derivative in (11.1). This yields the midpoint method

u_{n+1} = u_{n−1} + 2h f_n,   n ≥ 1,      (11.43)

where u_0 = y_0, u_1 is to be determined and f_k denotes the value f(t_k, u_k).

An example of an implicit two-step scheme is the Simpson method, obtained from (11.2) with t_0 = t_{n−1} and t = t_{n+1} and by using the Cavalieri-Simpson quadrature rule to approximate the integral of f:

u_{n+1} = u_{n−1} + (h/3) [f_{n−1} + 4f_n + f_{n+1}],   n ≥ 1,      (11.44)

where u_0 = y_0, and u_1 is to be determined.
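The midpoint scheme (11.43) needs the extra value u_1 from a one-step starter. The sketch below uses a single Heun step for u_1 (our choice of starter, in the spirit of the discussion that follows) and checks second-order convergence on y′ = −y, y(0) = 1:

```python
import math

def midpoint(f, y0, t0, t1, n_steps):
    """Midpoint method u_{n+1} = u_{n-1} + 2h*f_n, with u_1 obtained
    from a single Heun (improved Euler) step."""
    h = (t1 - t0) / n_steps
    u_prev = y0
    k = f(t0, u_prev)
    u = u_prev + h/2 * (k + f(t0 + h, u_prev + h*k))  # Heun step for u_1
    for n in range(1, n_steps):
        t = t0 + n*h
        u_prev, u = u, u_prev + 2*h*f(t, u)
    return u

f = lambda t, y: -y
err = lambda n: abs(midpoint(f, 1.0, 0.0, 1.0, n) - math.exp(-1.0))
# Halving h should reduce the error by roughly a factor of 4 (second order).
ratio = err(100) / err(200)
```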

From these examples, it is clear that a multistep method requires q initial values u_0, . . . , u_{q−1} for "taking off". Since the Cauchy problem provides only one datum (u_0), one way to assign the remaining values consists of resorting to explicit one-step methods of high order. An example is given by Heun's method (11.10); other examples are provided by the Runge-Kutta methods, which will be introduced in Section 11.8.

In this section we deal with linear multistep methods

u_{n+1} = Σ_{j=0}^{p} a_j u_{n−j} + h Σ_{j=0}^{p} b_j f_{n−j} + h b_{−1} f_{n+1},   n = p, p + 1, . . .      (11.45)

which are (p + 1)-step methods, p ≥ 0. For p = 0, we recover one-step methods.

The coefficients a_j, b_j are real and fully identify the scheme; they are such that a_p ≠ 0 or b_p ≠ 0. If b_{−1} ≠ 0 the scheme is implicit, otherwise it is explicit.

We can reformulate (11.45) as follows

Σ_{s=0}^{p+1} α_s u_{n+s} = h Σ_{s=0}^{p+1} β_s f(t_{n+s}, u_{n+s}),   n = 0, 1, . . . , N_h − (p + 1)      (11.46)

having set α_{p+1} = 1, α_s = −a_{p−s} for s = 0, . . . , p and β_s = b_{p−s} for s = 0, . . . , p + 1. Relation (11.46) is a special instance of the linear difference equation (11.28), where we set k = p + 1 and ϕ_{n+j} = hβ_j f(t_{n+j}, u_{n+j}), for j = 0, . . . , p + 1.

Also for MS methods we can characterize consistency in terms of the local

truncation error, according to the following definition.

Definition 11.8 The local truncation error (LTE) τ_{n+1}(h) introduced by the multistep method (11.45) at t_{n+1} (for n ≥ p) is defined through the following relation

h τ_{n+1}(h) = y_{n+1} − [ Σ_{j=0}^{p} a_j y_{n−j} + h Σ_{j=−1}^{p} b_j y′_{n−j} ],   n ≥ p,      (11.47)

where y_{n−j} = y(t_{n−j}) and y′_{n−j} = y′(t_{n−j}) for j = −1, . . . , p.

Analogous to one-step methods, the quantity h τ_{n+1}(h) is the residual generated at t_{n+1} if we pretend that the exact solution "satisfies" the numerical scheme. Letting τ(h) = max_n |τ_n(h)|, we have the following definition.
