
The behavior of the discrete wavelets depends on the steps s_0 and τ_0. When s_0 is close to 1 and τ_0 is small, the discrete wavelets are close to the continuous ones. For a fixed scale s_0, the localization points of the discrete wavelets along the scale axis are logarithmic, as log s = j log s_0. The choice s_0 = 2 corresponds to the dyadic sampling in frequency. The discrete time-step is τ_0 s_0^j and, typically, τ_0 = 1. Hence, the time-sampling step is a function of the scale and, along the time axis, the localization points of the wavelets depend on the scale.
For a given function f ∈ L^1(R), the corresponding discrete wavelet transform is

W_f(j,k) = \int_{-\infty}^{+\infty} f(t)\,\overline{h}_{j,k}(t)\,dt.

It is possible to introduce an orthonormal wavelet basis using discrete dilation and translation factors, i.e.

\int_{-\infty}^{+\infty} \overline{h}_{i,j}(t)\,h_{k,l}(t)\,dt = \delta_{ik}\delta_{jl}, \qquad \forall i, j, k, l \in Z.

With an orthogonal wavelet basis, an arbitrary function f can be reconstructed by the expansion

f(t) = A \sum_{j,k \in Z} W_f(j,k)\,h_{j,k}(t),

where A is a constant that does not depend on f.
From the computational standpoint, the discrete wavelet transform can be implemented at an even cheaper cost than the FFT algorithm for computing the Fourier transform.
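To make the cost remark concrete, the following MATLAB sketch shows one level of the fast (pyramid) wavelet transform; the choice of the Haar filter and the function name are illustrative assumptions, not taken from the text. Each level costs O(N) operations on a signal of length N, and applying it recursively to the approximation part gives a full decomposition in O(N) operations overall, versus O(N log N) for the FFT.

% One level of the fast discrete wavelet transform with the Haar filter
% (illustrative choice; not a Program of the book).
function [a, d] = haar_step(v)
  % v: signal of even length; a: approximation (scaling) coefficients,
  % d: detail (wavelet) coefficients, both of length length(v)/2.
  a = (v(1:2:end) + v(2:2:end)) / sqrt(2);
  d = (v(1:2:end) - v(2:2:end)) / sqrt(2);
end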

10.13 Applications
In this section we apply the theory of orthogonal polynomials to solve two
problems arising in quantum physics. In the first example we deal with
Gauss-Laguerre quadratures, while in the second case the Fourier analysis
and the FFT are considered.

10.13.1 Numerical Computation of Blackbody Radiation
The monochromatic energy density E(ν) of blackbody radiation as a func-
tion of frequency ν is expressed by the following law
E(\nu) = \frac{8\pi h}{c^3}\,\frac{\nu^3}{e^{h\nu/(K_B T)} - 1},


where h is the Planck constant, c is the speed of light, K_B is the Boltzmann constant and T is the absolute temperature of the blackbody (see, for instance, [AF83]).
To compute the total density of monochromatic energy that is emitted
by the blackbody (that is, the emitted energy per unit volume) we must
evaluate the integral
E = \int_0^{\infty} E(\nu)\,d\nu = \alpha T^4 \int_0^{\infty} \frac{x^3}{e^x - 1}\,dx,

where x = h\nu/(K_B T) and \alpha = 8\pi K_B^4/(ch)^3 \simeq 1.16 \cdot 10^{-16} [J][K^{-4}][m^{-3}].
We also let f(x) = x^3/(e^x - 1) and I(f) = \int_0^{\infty} f(x)\,dx.
To approximate I(f) up to a previously fixed absolute error ≤ δ, we compare method 1. introduced in Section 9.8.3 with Gauss-Laguerre quadratures.
In the case of method 1. we proceed as follows. For any a > 0 we write I(f) = \int_0^{a} f(x)\,dx + \int_a^{\infty} f(x)\,dx and try to find a function φ such that

\int_a^{\infty} f(x)\,dx \le \int_a^{\infty} \varphi(x)\,dx \le \frac{\delta}{2},   (10.89)

the integral \int_a^{\infty} \varphi(x)\,dx being "easy" to compute. Once a value of a has been found such that (10.89) is fulfilled, we compute the integral I_1(f) = \int_0^{a} f(x)\,dx using, for instance, the adaptive Cavalieri-Simpson formula introduced in Section 9.7.2 and denoted in the following by AD.
A natural choice of a bounding function for f is φ(x) = Kx^3 e^{-x}, for a suitable constant K > 1. For this we need K ≥ e^x/(e^x − 1) for any x ≥ a and, since this ratio is decreasing, it suffices to let K = e^a/(e^a − 1). Since \int_a^{\infty} x^3 e^{-x}\,dx = e^{-a}(a^3 + 3a^2 + 6a + 6), substituting back into (10.89) yields

\int_a^{\infty} f(x)\,dx \le \frac{a^3 + 3a^2 + 6a + 6}{e^a - 1} = \eta(a) \le \frac{\delta}{2}.

FIGURE 10.10. Distribution of quadrature nodes and graph of the integrand function

Letting δ = 10^{-3}, we see that (10.89) is satisfied by taking a ≃ 16. Program 77 for computing I_1(f) with the AD method, setting hmin = 10^{-3} and tol = 5 · 10^{-4}, yields the approximate value I_1 ≃ 6.4934 with a number of (nonuniform) partitions equal to 25.
The distribution of the quadrature nodes produced by the adaptive algorithm is plotted in Figure 10.10. Globally, using method 1. yields an approximation of I(f) equal to 6.4984. Table 10.1 shows, for the sake of comparison, some approximate values of I(f) obtained using the Gauss-Laguerre formulae with the number of nodes varying between 2 and 20. Notice that, taking n = 4 nodes, the accuracy of the two computational procedures is roughly equivalent.

 n     I_n(f)
 2     6.413727469517582
 3     6.481130171540022
 4     6.494535639802632
 5     6.494313365790864
10     6.493939967652101
15     6.493939402671590
20     6.493939402219742

TABLE 10.1. Approximate evaluation of I(f) = \int_0^{\infty} x^3/(e^x - 1)\,dx with Gauss-Laguerre quadratures
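For the reader who wishes to reproduce values of the kind reported in Table 10.1, a minimal MATLAB sketch follows. It generates the Gauss-Laguerre nodes and weights through the Golub-Welsch eigenvalue construction (this particular implementation and the function name are our own choices, not the book's Program 77), and evaluates the integral by writing the integrand as e^{-x} g(x) with g(x) = x^3/(1 - e^{-x}).

% Gauss-Laguerre approximation of I(f) = int_0^inf x^3/(exp(x)-1) dx.
% Nodes/weights from the Jacobi matrix of the Laguerre recurrence
% (Golub-Welsch); illustrative sketch, not the book's Program.
function I = gausslag_blackbody(n)
  k = (1:n-1)';
  J = diag(2*(0:n-1)+1) + diag(k,1) + diag(k,-1);   % Laguerre Jacobi matrix
  [V, D] = eig(J);
  [x, idx] = sort(diag(D));                         % quadrature nodes
  w = V(1, idx)'.^2;                                % weights (mu_0 = 1)
  g = x.^3 ./ (1 - exp(-x));                        % g(x) = exp(x)*f(x)
  I = w' * g;
end

Calling, say, gausslag_blackbody(5) should return a value close to the corresponding row of Table 10.1.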



10.13.2 Numerical Solution of the Schrödinger Equation
Let us consider the following differential equation arising in quantum mechanics, known as the Schrödinger equation

i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2}, \qquad x \in R,\; t > 0.   (10.90)
The symbols i and ℏ denote the imaginary unit and the reduced Planck constant, respectively. The complex-valued function ψ = ψ(x, t), the solution of (10.90), is called a wave function and the quantity |ψ(x, t)|^2 defines the probability density in the space variable x of a free electron of mass m at time t (see [FRL55]).
The corresponding Cauchy problem may represent a physical model for describing the motion of an electron in a cell of an infinite lattice (for more details see, e.g., [AF83]).
Consider the initial condition ψ(x, 0) = w(x), where w is the step function that takes the value 1/\sqrt{2b} for |x| ≤ b and is zero for |x| > b, with b = a/5, and where 2a represents the inter-ionic distance in the lattice. Therefore, we are searching for periodic solutions, with period equal to 2a.
Solving problem (10.90) can be carried out using Fourier analysis as follows. We first write the Fourier series of w and ψ (for any t > 0)

w(x) = \sum_{k=-N/2}^{N/2-1} w_k e^{i\pi kx/a}, \qquad w_k = \frac{1}{2a}\int_{-a}^{a} w(x)e^{-i\pi kx/a}\,dx,
\psi(x,t) = \sum_{k=-N/2}^{N/2-1} \psi_k(t) e^{i\pi kx/a}, \qquad \psi_k(t) = \frac{1}{2a}\int_{-a}^{a} \psi(x,t)e^{-i\pi kx/a}\,dx.   (10.91)

Then, we substitute (10.91) back into (10.90), obtaining the following Cauchy problem for the Fourier coefficients ψ_k, for k = −N/2, ..., N/2 − 1:

\psi_k'(t) = -i\,\frac{\hbar}{2m}\left(\frac{k\pi}{a}\right)^2 \psi_k(t), \qquad \psi_k(0) = w_k.   (10.92)

The coefficients {w_k} used in (10.93) have been obtained by regularizing the Fourier coefficients of the step function w through the Lanczos smoothing (10.56), in order to avoid the Gibbs phenomenon arising around the discontinuities of w (see Section 10.9.1).
After solving (10.92), we finally get, recalling (10.91), the following expression for the wave function

\psi_N(x,t) = \sum_{k=-N/2}^{N/2-1} w_k e^{-iE_k t/\hbar} e^{i\pi kx/a},   (10.93)

where the coefficients E_k = (k^2\pi^2\hbar^2)/(2ma^2) represent, from the physical standpoint, the energy levels that the electron may assume in its motion within the potential well.
To compute the coefficients w_k (and, as a consequence, their regularized counterparts), we have used the MATLAB intrinsic function fft (see Section 10.9.2), employing N = 2^6 = 64 points and letting a = 10 Å = 10^{-9} [m]. The time analysis has been carried out up to T = 10 [s], with time steps of 1 [s]; in all the reported graphs, the x-axis is measured in [Å], while the y-axes are respectively in units of 10^5 [m^{-1/2}] and 10^{10} [m^{-1}].
FIGURE 10.11. Probability density |ψ(x, t)|^2 at t = 0, 2, 5 [s], corresponding to a step function as initial datum: solution without filtering (left), with Lanczos filtering (right)
In Figure 10.11 we draw the probability density |ψ(x, t)|^2 at t = 0, 2 and 5 [s]. The result obtained without the regularizing procedure above is shown on the left, while the same calculation with the "filtering" of the Fourier coefficients is reported on the right. The second plot demonstrates the smoothing effect of the regularization on the solution, at the price of a slight enlargement of the step-like initial probability distribution.
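The whole construction (10.91)-(10.93) fits in a few MATLAB lines. The sketch below is our own illustration: the variable names, the SI values taken for ℏ and m, and the classical form of the Lanczos sigma factors are assumptions, not reproduced from the book's programs.

% Fourier solution (10.93) of (10.90) with a step-function initial datum.
% Illustrative sketch; physical constants and the smoothing factor form
% are assumed, not taken from the book.
N = 64; a = 1e-9; b = a/5;                    % a = 10 Angstrom, b = a/5
hbar = 1.0546e-34; m = 9.1094e-31;            % reduced Planck constant, electron mass
x = -a + 2*a*(0:N-1)'/N;                      % uniform grid on [-a, a)
k = (-N/2:N/2-1)';
w = (abs(x) <= b) / sqrt(2*b);                % step-function initial datum
wk = ((-1).^k) .* fftshift(fft(w)) / N;       % Fourier coefficients w_k in (10.91)
sigma = ones(N,1); j = (k ~= 0);
sigma(j) = sin(pi*k(j)/N) ./ (pi*k(j)/N);     % Lanczos smoothing factors (assumed form)
wk = sigma .* wk;
Ek = (k.^2 * pi^2 * hbar^2) / (2*m*a^2);      % energy levels
for t = [0 2 5]                               % times in seconds
  psi = exp(1i*pi*x*k'/a) * (wk .* exp(-1i*Ek*t/hbar));   % psi_N(.,t) on the grid
  plot(x*1e10, abs(psi).^2); hold on          % x in Angstrom
end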
Finally, it is interesting to apply Fourier analysis to solve problem (10.90) starting from a smooth initial datum. For this, we choose an initial probability density w of Gaussian form such that ‖w‖_2 = 1. The solution |ψ(x, t)|^2, this time computed without regularization, is shown in Figure 10.12 at t = 0, 2, 5, 7, 9 [s]. Notice the absence of spurious oscillations with respect to the previous case.

FIGURE 10.12. Probability density |ψ(x, t)|^2 at t = 0, 2, 5, 7, 9 [s], corresponding to an initial datum with Gaussian form

10.14 Exercises
1. Prove the three-term relation (10.11).
[Hint: set x = cos(θ), for 0 ≤ θ ≤ π.]
2. Prove (10.31).
[Hint: first prove that ‖v_n‖_n = (v_n, v_n)_n^{1/2}, ‖T_k‖_n = ‖T_k‖_w for k < n and ‖T_n‖_n^2 = 2‖T_n‖_w^2 (see [QV94], formula (4.3.16)). Then, the thesis follows from (10.29) multiplying by T_l (l = k) and taking (·, ·)_n.]
3. Prove (10.24) after showing that ‖(f − Π_n^{GL} f)'‖_ω ≤ C n^{1−s} ‖f‖_{s,ω}.
[Hint: use the Gagliardo-Nirenberg inequality

max_{−1 ≤ x ≤ 1} |f(x)| ≤ ‖f‖^{1/2} ‖f'‖^{1/2},

valid for any f ∈ L^2 with f' ∈ L^2. Next, use the relation that has just been shown to prove (10.24).]
4. Prove that the discrete seminorm ‖f‖_n = (f, f)_n^{1/2} is a norm for P_n.

5. Compute weights and nodes of the following quadrature formulae

\int_a^b \omega(x) f(x)\,dx = \sum_{i=0}^{n} \omega_i f(x_i),

in such a way that the order is maximum, setting

\omega(x) = \sqrt{x},  a = 0, b = 1, n = 1;
\omega(x) = 2x^2 + 1,  a = -1, b = 1, n = 0;
\omega(x) = 2 if 0 < x ≤ 1, 1 if −1 ≤ x ≤ 0,  a = -1, b = 1, n = 1.

[Solution: for ω(x) = \sqrt{x}, the nodes x_1 = 5/9 + (2/9)\sqrt{10/7}, x_2 = 5/9 − (2/9)\sqrt{10/7} are obtained, from which the weights can be computed (order 3); for ω(x) = 2x^2 + 1, we get x_1 = 3/5 and ω_1 = 5/3 (order 1); for the third choice of ω(x), we have x_1 = 1/22 + (1/22)\sqrt{155}, x_2 = 1/22 − (1/22)\sqrt{155} (order 3).]


6. Prove (10.40).
[Hint: notice that (Π_n^{GL} f, L_j)_n = \sum_k f_k (L_k, L_j)_n = ..., distinguishing the case j < n from the case j = n.]
7. Show that ||| · |||, defined in (10.45), is an essentially strict seminorm.
[Solution: use the Cauchy-Schwarz inequality (1.14) to check that the triangular inequality is satisfied. This proves that ||| · ||| is a seminorm. The second part of the exercise follows after a direct computation.]
8. Consider in an interval [a, b] the nodes

x_j = a + \left(j - \frac{1}{2}\right)\frac{b-a}{m}, \qquad j = 1, 2, \ldots, m,

for m ≥ 1. They are the midpoints of m equally spaced intervals in [a, b]. Let f be a given function; prove that the least-squares polynomial r_n with respect to the weight w(x) = 1 minimizes the error average, defined as

E = \lim_{m\to\infty} \left[ \frac{1}{m}\sum_{j=1}^{m} [f(x_j) - r_n(x_j)]^2 \right]^{1/2}.

9. Consider the function

F(a_0, a_1, \ldots, a_n) = \int_0^1 \left[ f(x) - \sum_{j=0}^{n} a_j x^j \right]^2 dx

and determine the coefficients a_0, a_1, ..., a_n in such a way that F is minimized. Which kind of linear system is obtained?
[Hint: enforce the conditions ∂F/∂a_i = 0 with i = 0, 1, ..., n. The matrix of the final linear system is the Hilbert matrix (see Example 3.2, Chapter 3), which is strongly ill-conditioned.]
11
Numerical Solution of Ordinary Differential Equations




In this chapter we deal with the numerical solution of the Cauchy problem for ordinary differential equations (henceforth abbreviated by ODEs). After a brief review of basic notions about ODEs, we introduce the most widely used techniques for the numerical approximation of scalar equations. The concepts of consistency, convergence, zero-stability and absolute stability will be addressed. Then, we extend our analysis to systems of ODEs, with emphasis on stiff problems.



11.1 The Cauchy Problem
The Cauchy problem (also known as the initial-value problem) consists of finding the solution of an ODE, in the scalar or vector case, given suitable initial conditions. In particular, in the scalar case, denoting by I an interval of R containing the point t_0, the Cauchy problem associated with a first-order ODE reads:
find a real-valued function y ∈ C^1(I), such that

y'(t) = f(t, y(t)),  t ∈ I,
y(t_0) = y_0,   (11.1)

where f(t, y) is a given real-valued function in the strip S = I × (−∞, +∞), which is continuous with respect to both variables. Should f depend on t only through y, the differential equation is called autonomous.

Most of our analysis will be concerned with one single differential equation (scalar case). The extension to the case of systems of first-order ODEs will be addressed in Section 11.9.
If f is continuous with respect to t, then the solution to (11.1) satisfies

y(t) - y_0 = \int_{t_0}^{t} f(\tau, y(\tau))\,d\tau.   (11.2)

Conversely, if y is defined by (11.2), then it is continuous in I and y(t_0) = y_0. Moreover, since y is a primitive of the continuous function f(·, y(·)), y ∈ C^1(I) and satisfies the differential equation y'(t) = f(t, y(t)).
Thus, if f is continuous the Cauchy problem (11.1) is equivalent to the integral equation (11.2). We shall see later on how to take advantage of this equivalence in the numerical methods.
Let us now recall two existence and uniqueness results for (11.1).
1. Local existence and uniqueness.
Suppose that f(t, y) is locally Lipschitz continuous at (t_0, y_0) with respect to y, that is, there exist two neighborhoods, J ⊆ I of t_0 of width r_J, and Σ of y_0 of width r_Σ, and a constant L > 0, such that

|f(t, y_1) − f(t, y_2)| ≤ L|y_1 − y_2|,  ∀t ∈ J, ∀y_1, y_2 ∈ Σ.   (11.3)

Then, the Cauchy problem (11.1) admits a unique solution in a neighborhood of t_0 with radius r_0, with 0 < r_0 < min(r_J, r_Σ/M, 1/L), where M is the maximum of |f(t, y)| on J × Σ. This solution is called the local solution.
Notice that condition (11.3) is automatically satisfied if f has a continuous derivative with respect to y: indeed, in such a case it suffices to choose L as the maximum of |∂f(t, y)/∂y| in J × Σ.
2. Global existence and uniqueness. The problem admits a unique global solution if one can take J = I and Σ = R in (11.3), that is, if f is uniformly Lipschitz continuous with respect to y.
In view of the stability analysis of the Cauchy problem, we consider the following problem

z'(t) = f(t, z(t)) + δ(t),  t ∈ I,
z(t_0) = y_0 + δ_0,   (11.4)

where δ_0 ∈ R and δ is a continuous function on I. Problem (11.4) is derived from (11.1) by perturbing both the initial datum y_0 and the source function f. Let us now characterize the sensitivity of the solution z to those perturbations.

Definition 11.1 ([Hah67], [Ste71] or [PS91]). Let I be a bounded set. The Cauchy problem (11.1) is stable in the sense of Liapunov (or stable) on I if, for any perturbation (δ_0, δ(t)) satisfying

|δ_0| < ε,  |δ(t)| < ε  ∀t ∈ I,

with ε > 0 sufficiently small to guarantee that the solution to the perturbed problem (11.4) does exist, then

∃C > 0 independent of ε such that |y(t) − z(t)| < Cε,  ∀t ∈ I.   (11.5)

If I is not bounded from above, we say that (11.1) is asymptotically stable if, as well as being Liapunov stable in any bounded interval I, the following limit also holds

|y(t) − z(t)| → 0,  for t → +∞.   (11.6)

The requirement that the Cauchy problem is stable is equivalent to requiring that it is well-posed in the sense stated in Chapter 2.
The uniform Lipschitz continuity of f with respect to y suffices to ensure the stability of the Cauchy problem. Indeed, letting w(t) = z(t) − y(t), we have

w'(t) = f(t, z(t)) − f(t, y(t)) + δ(t).

Therefore,

w(t) = δ_0 + \int_{t_0}^{t} [f(s, z(s)) − f(s, y(s))]\,ds + \int_{t_0}^{t} δ(s)\,ds,  ∀t ∈ I.

Thanks to the previous assumptions, it follows that

|w(t)| ≤ (1 + |t − t_0|)\,ε + L \int_{t_0}^{t} |w(s)|\,ds.

Applying the Gronwall lemma (which we include below for the reader's ease) yields

|w(t)| ≤ (1 + |t − t_0|)\,ε e^{L|t−t_0|},  ∀t ∈ I,

and, thus, (11.5) with C = (1 + K_I)e^{LK_I}, where K_I = max_{t∈I} |t − t_0|.

Lemma 11.1 (Gronwall) Let p be an integrable function, nonnegative on the interval (t_0, t_0 + T), and let g and ϕ be two continuous functions on [t_0, t_0 + T], g being nondecreasing. If ϕ satisfies the inequality

ϕ(t) ≤ g(t) + \int_{t_0}^{t} p(τ)ϕ(τ)\,dτ,  ∀t ∈ [t_0, t_0 + T],

then

ϕ(t) ≤ g(t)\exp\left( \int_{t_0}^{t} p(τ)\,dτ \right),  ∀t ∈ [t_0, t_0 + T].

For the proof, see, for instance, [QV94], Lemma 1.4.1.

The constant C that appears in (11.5) could be very large and, in general, depends on the upper extreme of the interval I, as in the proof above. For that reason, the property of asymptotic stability is more suitable for describing the behavior of the dynamical system (11.1) as t → +∞ (see [Arn73]).

As is well known, only a restricted number of nonlinear ODEs can be solved in closed form (see, for instance, [Arn73]). Moreover, even when this is possible, it is not always a straightforward task to find an explicit expression of the solution; for example, consider the (very simple) equation y' = (y − t)/(y + t), whose solution is only implicitly defined by the relation (1/2) log(t^2 + y^2) + tan^{-1}(y/t) = C, where C is a constant depending on the initial condition.
For this reason we are interested in numerical methods, since these can be applied to any ODE under the sole condition that it admits a unique solution.



11.2 One-Step Numerical Methods
Let us address the numerical approximation of the Cauchy problem (11.1). Fix 0 < T < +∞ and let I = (t_0, t_0 + T) be the integration interval and, correspondingly, for h > 0, let t_n = t_0 + nh, with n = 0, 1, 2, ..., N_h, be the sequence of discretization nodes of I into subintervals I_n = [t_n, t_{n+1}]. The width h of such subintervals is called the discretization stepsize. Notice that N_h is the maximum integer such that t_{N_h} ≤ t_0 + T. Let u_j be the approximation at node t_j of the exact solution y(t_j); this solution will be henceforth shortly denoted by y_j. Similarly, f_j denotes the value f(t_j, u_j). We obviously set u_0 = y_0.

Definition 11.2 A numerical method for the approximation of problem (11.1) is called a one-step method if ∀n ≥ 0, u_{n+1} depends only on u_n. Otherwise, the scheme is called a multistep method.

For now, we focus our attention on one-step methods. Here are some of them:

1. forward Euler method

u_{n+1} = u_n + hf_n;   (11.7)

2. backward Euler method

u_{n+1} = u_n + hf_{n+1}.   (11.8)

In both cases, y' is approximated through a finite difference: forward and backward differences are used in (11.7) and (11.8), respectively. Both finite differences are first-order approximations of the first derivative of y with respect to h (see Section 10.10.1).

3. trapezoidal (or Crank-Nicolson) method

u_{n+1} = u_n + \frac{h}{2}[f_n + f_{n+1}].   (11.9)

This method stems from approximating the integral on the right side of (11.2) by the trapezoidal quadrature rule (9.11).

4. Heun method

u_{n+1} = u_n + \frac{h}{2}[f_n + f(t_{n+1}, u_n + hf_n)].   (11.10)

This method can be derived from the trapezoidal method by substituting f(t_{n+1}, u_n + hf(t_n, u_n)) for f(t_{n+1}, u_{n+1}) in (11.9) (i.e., using the forward Euler method to compute u_{n+1}).
In this last case, we notice that the aim is to transform an implicit method into an explicit one. Addressing this concern, we recall the following.

Definition 11.3 (explicit and implicit methods) A method is called explicit if u_{n+1} can be computed directly in terms of (some of) the previous values u_k, k ≤ n. A method is said to be implicit if u_{n+1} depends implicitly on itself through f.

Methods (11.7) and (11.10) are explicit, while (11.8) and (11.9) are implicit. The latter require the solution of a nonlinear problem at each time step if f depends nonlinearly on the second argument.
A remarkable example of one-step methods is given by the Runge-Kutta methods, which will be analyzed in Section 11.8.
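As an illustration of how the explicit schemes above translate into code, here is a minimal MATLAB sketch (our own script with an arbitrarily chosen right-hand side, not one of the book's Programs) applying the forward Euler method (11.7) and Heun's method (11.10) to a scalar Cauchy problem.

% Forward Euler (11.7) and Heun (11.10) on y' = f(t,y), y(t0) = y0.
% Illustrative sketch; f, data and names are our own choices.
f  = @(t,y) -y + sin(t);              % sample right-hand side
t0 = 0; T = 10; y0 = 1; h = 0.1;
t  = t0:h:t0+T;  Nh = length(t) - 1;
ufe = zeros(1, Nh+1);  uh = ufe;  ufe(1) = y0;  uh(1) = y0;
for n = 1:Nh
  ufe(n+1) = ufe(n) + h*f(t(n), ufe(n));                        % forward Euler
  fn       = f(t(n), uh(n));
  uh(n+1)  = uh(n) + h/2*( fn + f(t(n+1), uh(n) + h*fn) );      % Heun
end

Backward Euler (11.8) and the trapezoidal method (11.9) would additionally require, at each step, the solution of a (generally nonlinear) equation for u_{n+1}, for instance by fixed-point or Newton iterations.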


11.3 Analysis of One-Step Methods
Any one-step explicit method for the approximation of (11.1) can be cast in the concise form

u_{n+1} = u_n + hΦ(t_n, u_n, f_n; h),  0 ≤ n ≤ N_h − 1,  u_0 = y_0,   (11.11)

where Φ(·, ·, ·; ·) is called an increment function. Letting as usual y_n = y(t_n), analogously to (11.11) we can write

y_{n+1} = y_n + hΦ(t_n, y_n, f(t_n, y_n); h) + ε_{n+1},  0 ≤ n ≤ N_h − 1,   (11.12)

where ε_{n+1} is the residual arising at the point t_{n+1} when we pretend that the exact solution "satisfies" the numerical scheme. Let us write the residual as

ε_{n+1} = hτ_{n+1}(h).

The quantity τ_{n+1}(h) is called the local truncation error (LTE) at the node t_{n+1}. We thus define the global truncation error to be the quantity

τ(h) = \max_{0 \le n \le N_h - 1} |τ_{n+1}(h)|.

Notice that τ(h) depends on the solution y of the Cauchy problem (11.1).
The forward Euler method is a special instance of (11.11), where

Φ(t_n, u_n, f_n; h) = f_n,

while to recover Heun's method we must set

Φ(t_n, u_n, f_n; h) = \frac{1}{2}[f_n + f(t_n + h, u_n + hf_n)].

A one-step explicit scheme is fully characterized by its increment function Φ. This function, in all the cases considered thus far, is such that

\lim_{h\to 0} Φ(t_n, y_n, f(t_n, y_n); h) = f(t_n, y_n),  ∀t_n ≥ t_0.   (11.13)

Property (11.13), together with the obvious relation y_{n+1} − y_n = hy'(t_n) + O(h^2), ∀n ≥ 0, allows one to obtain from (11.12) that lim_{h→0} τ_n(h) = 0 for 0 ≤ n ≤ N_h − 1. In turn, this condition ensures that

\lim_{h\to 0} τ(h) = 0,

which expresses the consistency of the numerical method (11.11) with the Cauchy problem (11.1). In general, a method is said to be consistent if its LTE is infinitesimal with respect to h. Moreover, a scheme has order p if, ∀t ∈ I, the solution y(t) of the Cauchy problem (11.1) fulfills the condition

τ(h) = O(h^p)  for h → 0.   (11.14)

Using Taylor expansions, as was done in Section 11.2, it can be proved that the forward Euler method has order 1, while the Heun method has order 2 (see Exercises 1 and 2).
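These orders can also be checked empirically. The short MATLAB sketch below (our own test, with an arbitrary problem whose exact solution is known) halves the stepsize repeatedly and estimates the observed convergence rates from the errors at the final time.

% Empirical order check: the observed rates should tend to 1 (forward Euler)
% and 2 (Heun). Test problem and script are illustrative choices.
f = @(t,y) -y;  yex = @(t) exp(-t);  t0 = 0; T = 1; y0 = 1;
H = T ./ (10 * 2.^(0:4));                       % stepsizes h, h/2, h/4, ...
efe = zeros(size(H));  eh = efe;
for i = 1:length(H)
  h = H(i);  t = t0:h:t0+T;  u = y0;  v = y0;
  for n = 1:length(t)-1
    u = u + h*f(t(n), u);                                        % forward Euler
    v = v + h/2*( f(t(n), v) + f(t(n+1), v + h*f(t(n), v)) );    % Heun
  end
  efe(i) = abs(u - yex(T));  eh(i) = abs(v - yex(T));
end
log2(efe(1:end-1)./efe(2:end))    % approximately 1
log2(eh(1:end-1)./eh(2:end))      % approximately 2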

11.3.1 The Zero-Stability
Let us formulate a requirement analogous to the one for Liapunov stability (11.5), specifically for the numerical scheme. If (11.5) is satisfied with a constant C independent of h, we shall say that the numerical method is zero-stable. Precisely:

Definition 11.4 (zero-stability of one-step methods) The numerical method (11.11) for the approximation of problem (11.1) is zero-stable if

∃h_0 > 0, ∃C > 0 : ∀h ∈ (0, h_0],  |z_n^{(h)} − u_n^{(h)}| ≤ Cε,  0 ≤ n ≤ N_h,   (11.15)

where z_n^{(h)}, u_n^{(h)} are respectively the solutions of the problems

z_{n+1}^{(h)} = z_n^{(h)} + h[Φ(t_n, z_n^{(h)}, f(t_n, z_n^{(h)}); h) + δ_{n+1}],
z_0^{(h)} = y_0 + δ_0,   (11.16)

u_{n+1}^{(h)} = u_n^{(h)} + hΦ(t_n, u_n^{(h)}, f(t_n, u_n^{(h)}); h),
u_0^{(h)} = y_0,   (11.17)

for 0 ≤ n ≤ N_h − 1, under the assumption that |δ_k| ≤ ε, 0 ≤ k ≤ N_h.

Zero-stability thus requires that, in a bounded interval, (11.15) holds for any value h ≤ h_0. This property deals, in particular, with the behavior of the numerical method in the limit case h → 0 and this justifies the name of zero-stability. This latter is therefore a distinguishing property of the numerical method itself, not of the Cauchy problem (which, indeed, is stable due to the uniform Lipschitz continuity of f). Property (11.15) ensures that the numerical method has a weak sensitivity with respect to small changes in the data and is thus stable in the sense of the general definition given in Chapter 2.

Remark 11.1 The constant C in (11.15) is independent of h (and thus of N_h), but it can depend on the width T of the integration interval I. Actually, (11.15) does not exclude a priori the constant C from being an unbounded function of T.

The request that a numerical method be stable arises, before anything else, from the need of keeping under control the (unavoidable) errors introduced by the finite arithmetic of the computer. Indeed, if the numerical method were not zero-stable, the rounding errors made on y_0 as well as in the process of computing f(t_n, u_n) would make the computed solution completely useless.

Theorem 11.1 (Zero-stability) Consider the explicit one-step method (11.11) for the numerical solution of the Cauchy problem (11.1). Assume that the increment function Φ is Lipschitz continuous with respect to the second argument, with constant Λ independent of h and of the nodes t_j ∈ [t_0, t_0 + T], that is,

∃h_0 > 0, ∃Λ > 0 : ∀h ∈ (0, h_0],
|Φ(t_n, u_n^{(h)}, f(t_n, u_n^{(h)}); h) − Φ(t_n, z_n^{(h)}, f(t_n, z_n^{(h)}); h)| ≤ Λ|u_n^{(h)} − z_n^{(h)}|,  0 ≤ n ≤ N_h.   (11.18)

Then, method (11.11) is zero-stable.

Proof. Setting w_j^{(h)} = z_j^{(h)} − u_j^{(h)}, by subtracting (11.17) from (11.16) we obtain, for j = 0, ..., N_h − 1,

w_{j+1}^{(h)} = w_j^{(h)} + h[Φ(t_j, z_j^{(h)}, f(t_j, z_j^{(h)}); h) − Φ(t_j, u_j^{(h)}, f(t_j, u_j^{(h)}); h)] + hδ_{j+1}.

Summing over j gives, for n = 1, ..., N_h,

w_n^{(h)} = w_0^{(h)} + h\sum_{j=0}^{n-1} δ_{j+1} + h\sum_{j=0}^{n-1} [Φ(t_j, z_j^{(h)}, f(t_j, z_j^{(h)}); h) − Φ(t_j, u_j^{(h)}, f(t_j, u_j^{(h)}); h)],

so that, by (11.18),

|w_n^{(h)}| ≤ |w_0^{(h)}| + h\sum_{j=0}^{n-1} |δ_{j+1}| + hΛ\sum_{j=0}^{n-1} |w_j^{(h)}|,  1 ≤ n ≤ N_h.   (11.19)

Applying the discrete Gronwall lemma, given below, we obtain

|w_n^{(h)}| ≤ (1 + hn)\,ε\,e^{nhΛ},  1 ≤ n ≤ N_h.

Then (11.15) follows from noticing that hn ≤ T and setting C = (1 + T)e^{ΛT}. ◊

Notice that zero-stability implies the boundedness of the solution when f is linear with respect to the second argument.

Lemma 11.2 (discrete Gronwall) Let k_n be a nonnegative sequence and ϕ_n a sequence such that

ϕ_0 ≤ g_0,
ϕ_n ≤ g_0 + \sum_{s=0}^{n-1} p_s + \sum_{s=0}^{n-1} k_s ϕ_s,  n ≥ 1.

If g_0 ≥ 0 and p_n ≥ 0 for any n ≥ 0, then

ϕ_n ≤ \left( g_0 + \sum_{s=0}^{n-1} p_s \right) \exp\left( \sum_{s=0}^{n-1} k_s \right),  n ≥ 1.

For the proof, see, for instance, [QV94], Lemma 1.4.2. In the specific case of the Euler method, checking the property of zero-stability can be done directly using the Lipschitz continuity of f (we refer the reader to the end of Section 11.3.2). In the case of multistep methods, the analysis will lead to the verification of a purely algebraic property, the so-called root condition (see Section 11.6.3).


11.3.2 Convergence Analysis
Definition 11.5 A method is said to be convergent if

|u_n − y_n| ≤ C(h),  ∀n = 0, ..., N_h,

where C(h) is an infinitesimal with respect to h. In that case, it is said to be convergent with order p if ∃C > 0 such that C(h) = Ch^p.
We can prove the following theorem.

Theorem 11.2 (Convergence) Under the same assumptions as in Theorem 11.1, we have

|y_n − u_n| ≤ (|y_0 − u_0| + nhτ(h))\,e^{nhΛ},  1 ≤ n ≤ N_h.   (11.20)

Therefore, if the consistency assumption (11.13) holds and |y_0 − u_0| → 0 as h → 0, then the method is convergent. Moreover, if |y_0 − u_0| = O(h^p) and the method has order p, then it is also convergent with order p.

Proof. Setting w_j = y_j − u_j, subtracting (11.11) from (11.12) and proceeding as in the proof of the previous theorem yields inequality (11.19), with the understanding that

w_0 = y_0 − u_0  and  δ_{j+1} = τ_{j+1}(h).

The estimate (11.20) is then obtained by applying again the discrete Gronwall lemma. From the fact that nh ≤ T and τ(h) = O(h^p), we can conclude that |y_n − u_n| ≤ Ch^p with C depending on T and Λ but not on h. ◊

A consistent and zero-stable method is thus convergent. This property is known as the Lax-Richtmyer theorem or equivalence theorem (the converse: "a convergent method is zero-stable" being obviously true). This theorem, which is proven in [IK66], was already advocated in Section 2.2.1 and is a central result in the analysis of numerical methods for ODEs (see [Dah56] or [Hen62] for linear multistep methods, [But66], [MNS74] for a wider class of methods). It will be considered again in Section 11.5 for the analysis of multistep methods.

We carry out in detail the convergence analysis in the case of the forward Euler method, without resorting to the discrete Gronwall lemma. In the first part of the proof we assume that any operation is performed in exact arithmetic and that u_0 = y_0.
Denote by e_{n+1} = y_{n+1} − u_{n+1} the error at node t_{n+1}, with n = 0, 1, ..., and notice that

e_{n+1} = (y_{n+1} − u*_{n+1}) + (u*_{n+1} − u_{n+1}),   (11.21)

where u*_{n+1} = y_n + hf(t_n, y_n) is the solution obtained after one step of the forward Euler method starting from the initial datum y_n (see Figure 11.1). The first addendum in (11.21) accounts for the consistency error, the second one for the cumulation of these errors. Then

y_{n+1} − u*_{n+1} = hτ_{n+1}(h),   u*_{n+1} − u_{n+1} = e_n + h[f(t_n, y_n) − f(t_n, u_n)].

FIGURE 11.1. Geometrical interpretation of the local and global truncation errors at node t_{n+1} for the forward Euler method

As a consequence,

|e_{n+1}| ≤ h|τ_{n+1}(h)| + |e_n| + h|f(t_n, y_n) − f(t_n, u_n)| ≤ hτ(h) + (1 + hL)|e_n|,

L being the Lipschitz constant of f. By recursion on n, we find

|e_{n+1}| ≤ [1 + (1 + hL) + \ldots + (1 + hL)^n]\,hτ(h) = \frac{(1 + hL)^{n+1} - 1}{L}\,τ(h) ≤ \frac{e^{L(t_{n+1} - t_0)} - 1}{L}\,τ(h).

The last inequality follows from noticing that 1 + hL ¤ ehL and (n + 1)h =
tn+1 ’ t0 .
On the other hand, if y ∈ C 2 (I), the LTE for the forward Euler method
is (see Section 10.10.1)
h
y (ξ), ξ ∈ (tn , tn+1 ),
„n+1 (h) =
2
and thus, „ (h) ¤ (M/2)h, where M = maxξ∈I |y (ξ)|. In conclusion,
eL(tn+1 ’t0 ) ’ 1 M
|en+1 | ¤ h, ∀n ≥ 0, (11.22)
L 2
from which it follows that the global error tends to zero with the same
order as the local truncation error.

If the rounding errors are also accounted for, we can assume that the solution ū_{n+1}, actually computed by the forward Euler method at time t_{n+1}, is such that

ū_0 = y_0 + ζ_0,   ū_{n+1} = ū_n + hf(t_n, ū_n) + ζ_{n+1},   (11.23)

having denoted the rounding error by ζ_j, for j ≥ 0.
Problem (11.23) is an instance of (11.16), provided that we identify ζ_{n+1} and ū_n with hδ_{n+1} and z_n^{(h)} in (11.16), respectively. Combining Theorems 11.1 and 11.2 we get, instead of (11.22), the following error estimate

|y_{n+1} − ū_{n+1}| ≤ e^{L(t_{n+1} - t_0)} \left[ |ζ_0| + \frac{1}{L}\left( \frac{M}{2}h + \frac{ζ}{h} \right) \right],

where ζ = max_{1≤j≤n+1} |ζ_j|. The presence of rounding errors does not allow, therefore, to conclude that the error goes to zero as h → 0. Actually, there exists an optimal (non null) value of h, h_opt, for which the error is minimized. For h < h_opt, the rounding error dominates the truncation error and the global error increases.
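Indeed, neglecting |ζ_0| and minimizing the function h ↦ (M/2)h + ζ/h appearing in the bound above (a short computation that we add here for completeness) gives

h_{opt} = \sqrt{\frac{2ζ}{M}},

for which the bracketed term equals \sqrt{2Mζ}/L; below h_{opt} the bound grows again as h decreases.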


11.3.3 The Absolute Stability
The property of absolute stability is in some way specular to zero-stability, as far as the roles played by h and I are concerned. Heuristically, we say that a numerical method is absolutely stable if, for h fixed, u_n remains bounded as t_n → +∞. This property, thus, deals with the asymptotic behavior of u_n, as opposed to a zero-stable method for which, for a fixed integration interval, u_n remains bounded as h → 0.
For a precise definition, consider the linear Cauchy problem (that from now on, we shall refer to as the test problem)

y'(t) = λy(t),  t > 0,
y(0) = 1,   (11.24)

with λ ∈ C, whose solution is y(t) = e^{λt}. Notice that lim_{t→+∞} |y(t)| = 0 if Re(λ) < 0.

Definition 11.6 A numerical method for approximating (11.24) is absolutely stable if

|u_n| → 0  as  t_n → +∞.   (11.25)

Let h be the discretization stepsize. The numerical solution u_n of (11.24) obviously depends on h and λ. The region of absolute stability of the numerical method is the subset of the complex plane

A = {z = hλ ∈ C : (11.25) is satisfied}.   (11.26)

Thus, A is the set of the values of the product hλ for which the numerical method furnishes solutions that decay to zero as t_n tends to infinity.


Let us check whether the one-step methods introduced previously are absolutely stable.

1. Forward Euler method: applying (11.7) to problem (11.24) yields u_{n+1} = u_n + hλu_n for n ≥ 0, with u_0 = 1. Proceeding recursively on n, we get

u_n = (1 + hλ)^n,  n ≥ 0.

Therefore, condition (11.25) is satisfied iff |1 + hλ| < 1, that is, if hλ lies within the unit circle with center at (−1, 0) (see Figure 11.3). This amounts to requiring that

hλ ∈ C^− and 0 < h < -\frac{2\mathrm{Re}(λ)}{|λ|^2},   (11.27)

where

C^− = {z ∈ C : Re(z) < 0}.

Example 11.1 For the Cauchy problem y'(x) = −5y(x) for x > 0 and y(0) = 1, condition (11.27) implies 0 < h < 2/5. Figure 11.2 (left) shows the behavior of the computed solution for two values of h which do not fulfill this condition, while on the right we show the solutions for two values of h that do. Notice that in this second case the oscillations, if present, damp out as t grows.


2. Backward Euler method: proceeding as before, we get this time

u_n = \frac{1}{(1 - hλ)^n},  n ≥ 0.

The absolute stability property (11.25) is satisfied for any value of hλ that does not belong to the unit circle of center (1, 0) (see Figure 11.3, right).
FIGURE 11.2. Left: computed solutions for h = 0.41 > 2/5 (dashed line) and h = 2/5 (solid line). Notice how, in the limiting case h = 2/5, the oscillations remain unmodified as t grows. Right: two solutions are reported for h = 0.39 (solid line) and h = 0.15 (dashed line)

Example 11.2 The numerical solution given by the backward Euler method in the case of Example 11.1 does not exhibit any oscillation for any value of h. On the other hand, the same method, if applied to the problem y'(t) = 5y(t) for t > 0 and with y(0) = 1, computes a solution that decays anyway to zero as t → ∞ if h > 2/5, despite the fact that the exact solution of the Cauchy problem tends to infinity.


3. Trapezoidal (or Crank-Nicolson) method: we get

u_n = \left[ \left(1 + \frac{1}{2}λh\right) \Big/ \left(1 - \frac{1}{2}λh\right) \right]^n,  n ≥ 0,

hence (11.25) is fulfilled for any hλ ∈ C^−.

4. Heun's method: applying (11.10) to problem (11.24) and proceeding by recursion on n, we obtain

u_n = \left(1 + hλ + \frac{(hλ)^2}{2}\right)^n,  n ≥ 0.

As shown in Figure 11.3, the region of absolute stability of Heun's method is larger than the corresponding one of Euler's method. However, its restriction to the real axis is the same.
We say that a method is A-stable if A ∩ C^− = C^−, i.e., if, for Re(λ) < 0, condition (11.25) is satisfied for all values of h.
The backward Euler and Crank-Nicolson methods are A-stable, while the forward Euler and Heun methods are conditionally stable.
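The regions in Figure 11.3 can be visualized directly as the level sets |R(hλ)| = 1 of the amplification factors derived above; the following MATLAB sketch (our own plotting script, not a Program of the book) traces their boundaries on a grid of the complex plane.

% Boundaries of the absolute stability regions of Figure 11.3, drawn as the
% level set |R(z)| = 1 of each method's amplification factor (z = h*lambda).
[zr, zi] = meshgrid(-4:0.02:2, -2:0.02:2);
z = zr + 1i*zi;
contour(zr, zi, abs(1 + z),          [1 1], 'b');  hold on   % forward Euler
contour(zr, zi, abs(1 ./ (1 - z)),   [1 1], 'r');            % backward Euler
contour(zr, zi, abs(1 + z + z.^2/2), [1 1], 'k');            % Heun
axis equal, grid on, xlabel('Re'), ylabel('Im')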

FIGURE 11.3. Regions of absolute stability for the forward (FE) and backward Euler (BE) methods and for Heun's method (H). Notice that the region of absolute stability of the BE method lies outside the unit circle of center (1, 0) (shaded area)

Remark 11.2 Notice that the implicit one-step methods examined so far are unconditionally absolutely stable, while explicit schemes are conditionally absolutely stable. This is, however, not a general rule: in fact, there exist implicit unstable or only conditionally stable schemes. On the contrary, there are no explicit unconditionally absolutely stable schemes [Wid67].


11.4 Difference Equations
For any integer k ≥ 1, an equation of the form

u_{n+k} + α_{k-1}u_{n+k-1} + \ldots + α_0 u_n = φ_{n+k},  n = 0, 1, ...   (11.28)

is called a linear difference equation of order k. The coefficients α_0 ≠ 0, α_1, ..., α_{k-1} may or may not depend on n. If, for any n, the right side φ_{n+k} is equal to zero, the equation is said to be homogeneous, while if the α_j's are independent of n it is called a linear difference equation with constant coefficients.
Difference equations arise, for instance, in the discretization of ordinary differential equations. Regarding this, we notice that all the numerical methods examined so far end up with equations like (11.28). More generally, equations like (11.28) are encountered when quantities are defined through linear recursive relations. Another relevant application is concerned with the discretization of boundary value problems (see Chapter 12). For further details on the subject, we refer to Chapters 2 and 5 of [BO78] and to Chapter 6 of [Gau97].

Any sequence {u_n, n = 0, 1, ...} of values that satisfies (11.28) is called a solution to the equation (11.28). Given k initial values u_0, ..., u_{k-1}, it is always possible to construct a solution of (11.28) by computing (sequentially)

u_{n+k} = [φ_{n+k} − (α_{k-1}u_{n+k-1} + \ldots + α_0 u_n)],  n = 0, 1, ...

However, our interest is to find an expression of the solution u_{n+k} which depends only on the coefficients and on the initial values.
We start by considering the homogeneous case with constant coefficients,

u_{n+k} + α_{k-1}u_{n+k-1} + \ldots + α_0 u_n = 0,  n = 0, 1, ...   (11.29)

and associate with (11.29) the characteristic polynomial Π ∈ P_k defined as

Π(r) = r^k + α_{k-1}r^{k-1} + \ldots + α_1 r + α_0.   (11.30)

Denoting its roots by r_j, j = 0, ..., k − 1, any sequence of the form

{r_j^n, n = 0, 1, ...},  for j = 0, ..., k − 1,   (11.31)

is a solution of (11.29), since

r_j^{n+k} + α_{k-1}r_j^{n+k-1} + \ldots + α_0 r_j^n = r_j^n\left( r_j^k + α_{k-1}r_j^{k-1} + \ldots + α_0 \right) = r_j^n Π(r_j) = 0.

We say that the k sequences defined in (11.31) are the fundamental solutions of the homogeneous equation (11.29). Any sequence of the form

u_n = γ_0 r_0^n + γ_1 r_1^n + \ldots + γ_{k-1} r_{k-1}^n,  n = 0, 1, ...   (11.32)

is still a solution to (11.29), since it is a linear equation.
The coefficients γ_0, ..., γ_{k-1} can be determined by imposing the k initial conditions u_0, ..., u_{k-1}. Moreover, it can be proved that if all the roots of Π are simple, then all the solutions of (11.29) can be cast in the form (11.32).
This last statement no longer holds if there are roots of Π with multiplicity greater than 1. If, for a certain j, the root r_j has multiplicity m ≥ 2, in order to obtain a system of fundamental solutions that generate all the solutions of (11.29), it suffices to replace the corresponding fundamental solution {r_j^n, n = 0, 1, ...} with the m sequences

{r_j^n, n = 0, 1, ...}, {n r_j^n, n = 0, 1, ...}, ..., {n^{m-1} r_j^n, n = 0, 1, ...}.

More generally, assuming that r_0, ..., r_{k'} are the distinct roots of Π, with multiplicities equal to m_0, ..., m_{k'}, respectively, we can write the solution of (11.29) as

u_n = \sum_{j=0}^{k'} \left( \sum_{s=0}^{m_j - 1} γ_{sj} n^s \right) r_j^n,  n = 0, 1, ...   (11.33)

Notice that even in presence of complex conjugate roots one can still obtain a real solution (see Exercise 3).

Example 11.3 For the difference equation u_{n+2} − u_n = 0, we have Π(r) = r^2 − 1, then r_0 = −1 and r_1 = 1; therefore the solution is given by u_n = γ_{00}(−1)^n + γ_{01}. In particular, enforcing the initial conditions u_0 and u_1 gives γ_{00} = (u_0 − u_1)/2, γ_{01} = (u_0 + u_1)/2.

Example 11.4 For the difference equation u_{n+3} − 2u_{n+2} − 7u_{n+1} − 4u_n = 0, Π(r) = r^3 − 2r^2 − 7r − 4. Its roots are r_0 = −1 (with multiplicity 2) and r_1 = 4, and the solution is u_n = (γ_{00} + nγ_{10})(−1)^n + γ_{01}4^n. Enforcing the initial conditions, we can compute the unknown coefficients as the solution of the following linear system

γ_{00} + γ_{01} = u_0,
−γ_{00} − γ_{10} + 4γ_{01} = u_1,
γ_{00} + 2γ_{10} + 16γ_{01} = u_2,

that yields γ_{00} = (24u_0 − 2u_1 − u_2)/25, γ_{10} = (u_2 − 3u_1 − 4u_0)/5 and γ_{01} = (2u_1 + u_0 + u_2)/25.
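As a quick numerical check of Example 11.4 (our own verification script, with arbitrarily chosen initial values), one can build u_n from the closed form above and verify that the difference equation is satisfied.

% Verify Example 11.4: u_n = (g00 + n*g10)*(-1)^n + g01*4^n solves
% u_{n+3} - 2u_{n+2} - 7u_{n+1} - 4u_n = 0 for the computed coefficients.
u0 = 1; u1 = 0; u2 = 2;                      % arbitrary initial values
g00 = (24*u0 - 2*u1 - u2)/25;
g10 = (u2 - 3*u1 - 4*u0)/5;
g01 = (2*u1 + u0 + u2)/25;
n = 0:10;
u = (g00 + n*g10).*(-1).^n + g01*4.^n;
max(abs( u(4:end) - 2*u(3:end-1) - 7*u(2:end-2) - 4*u(1:end-3) ))   % ~ 0 up to roundoff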

The expression (11.33) is of little practical use since it does not outline the dependence of u_n on the k initial conditions. A more convenient representation is obtained by introducing a new set {ψ_j^{(n)}, n = 0, 1, ...} of fundamental solutions that satisfy

ψ_j^{(i)} = δ_{ij},  i, j = 0, 1, ..., k − 1.   (11.34)

Then, the solution of (11.29) subject to the initial conditions u_0, ..., u_{k-1} is given by

u_n = \sum_{j=0}^{k-1} u_j ψ_j^{(n)},  n = 0, 1, ...   (11.35)

The new fundamental solutions {ψ_j^{(n)}, n = 0, 1, ...} can be represented in terms of those in (11.31) as follows

ψ_j^{(n)} = \sum_{m=0}^{k-1} β_{j,m} r_m^n,  for j = 0, ..., k − 1, n = 0, 1, ...   (11.36)

By requiring (11.34), we obtain the k linear systems

\sum_{m=0}^{k-1} β_{j,m} r_m^i = δ_{ij},  i, j = 0, ..., k − 1,

whose matrix form is

R b_j = e_j,  j = 0, ..., k − 1.   (11.37)

Here e_j denotes the unit vector of R^k, R = (r_{im}) = (r_m^i) and b_j = (β_{j,0}, ..., β_{j,k-1})^T. If all the r_j's are simple roots of Π, the matrix R is nonsingular (see Exercise 5).
The general case where Π has k' + 1 distinct roots r_0, ..., r_{k'} with multiplicities m_0, ..., m_{k'}, respectively, can be dealt with by replacing in (11.36) {r_j^n, n = 0, 1, ...} with {r_j^n n^s, n = 0, 1, ...}, where j = 0, ..., k' and s = 0, ..., m_j − 1.

Example 11.5 We consider again the difference equation of Example 11.4. Here we have {r_0^n, n r_0^n, r_1^n, n = 0, 1, ...}, so that the matrix R becomes

R = \begin{bmatrix} r_0^0 & 0 & r_1^0 \\ r_0^1 & r_0^1 & r_1^1 \\ r_0^2 & 2r_0^2 & r_1^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 \\ -1 & -1 & 4 \\ 1 & 2 & 16 \end{bmatrix}.

Solving the three systems (11.37) yields

ψ_0^{(n)} = \frac{24}{25}(−1)^n − \frac{4}{5}n(−1)^n + \frac{1}{25}4^n,
ψ_1^{(n)} = −\frac{2}{25}(−1)^n − \frac{3}{5}n(−1)^n + \frac{2}{25}4^n,
ψ_2^{(n)} = −\frac{1}{25}(−1)^n + \frac{1}{5}n(−1)^n + \frac{1}{25}4^n,

from which it can be checked that the solution u_n = \sum_{j=0}^{2} u_j ψ_j^{(n)} coincides with the one already found in Example 11.4.


Now we return to the case of nonconstant coe¬cients and consider the
following homogeneous equation
k
un+k + ±k’j (n)un+k’j = 0, n = 0, 1, . . . . (11.38)
j=1

The goal is to transform it into an ODE by means of a function F , called
the generating function of the equation (11.38). F depends on the real
variable t and is derived as follows. We require that the n-th coe¬cient
of the Taylor series of F around t = 0 can be written as γn un , for some
unknown constant γn , so that

γn un tn .
F (t) = (11.39)
n=0

The coe¬cients {γn } are unknown and must be determined in such a way
that
® 

k k
°un+k + ±k’j (n)un+k’j » tn ,
(k’j)
cj F (t) = (11.40)
n=0
j=0 j=1
486 11. Numerical Solution of Ordinary Di¬erential Equations

where cj are suitable unknown constants not depending on n. Note that
owing to (11.39) we obtain the ODE
k
cj F (k’j) (t) = 0
j=0


to which we must add the initial conditions F (j) (0) = γj uj for j = 0, . . . , k’
1. Once F is available, it is simple to recover un through the de¬nition of
F itself.

Example 11.6 Consider the difference equation

(n + 2)(n + 1)u_{n+2} − 2(n + 1)u_{n+1} − 3u_n = 0,  n = 0, 1, ...   (11.41)

with the initial conditions u_0 = u_1 = 2. We look for a generating function of the form (11.39). By term-by-term derivation of the series, we get

F'(t) = \sum_{n=0}^{\infty} γ_n n u_n t^{n-1},   F''(t) = \sum_{n=0}^{\infty} γ_n n(n-1) u_n t^{n-2},

and, after some algebra, we find

F'(t) = \sum_{n=0}^{\infty} γ_n n u_n t^{n-1} = \sum_{n=0}^{\infty} γ_{n+1}(n+1) u_{n+1} t^n,
F''(t) = \sum_{n=0}^{\infty} γ_n n(n-1) u_n t^{n-2} = \sum_{n=0}^{\infty} γ_{n+2}(n+2)(n+1) u_{n+2} t^n.

As a consequence, (11.40) becomes

\sum_{n=0}^{\infty} (n+1)(n+2)u_{n+2} t^n − 2\sum_{n=0}^{\infty} (n+1)u_{n+1} t^n − 3\sum_{n=0}^{\infty} u_n t^n
= c_0 \sum_{n=0}^{\infty} γ_{n+2}(n+2)(n+1)u_{n+2} t^n + c_1 \sum_{n=0}^{\infty} γ_{n+1}(n+1)u_{n+1} t^n + c_2 \sum_{n=0}^{\infty} γ_n u_n t^n,

so that, equating both sides, we find

γ_n = 1 ∀n ≥ 0,  c_0 = 1,  c_1 = −2,  c_2 = −3.

We have thus associated with the difference equation the following ODE with constant coefficients

F''(t) − 2F'(t) − 3F(t) = 0,

with the initial conditions F(0) = F'(0) = 2. The n-th coefficient of the solution F(t) = e^{3t} + e^{-t} is

\frac{1}{n!} F^{(n)}(0) = \frac{1}{n!}[(−1)^n + 3^n],

so that u_n = (1/n!)[(−1)^n + 3^n] is the solution of (11.41).
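The closed form can be checked numerically against the recursion (11.41) itself; the short MATLAB script below is our own verification, not part of the example.

% Quick check of Example 11.6: compare u_n = ((-1)^n + 3^n)/n! with the
% values produced directly by the recursion (11.41).
N = 15; u = zeros(1, N+1); u(1) = 2; u(2) = 2;          % u(n+1) stores u_n
for n = 0:N-2
  u(n+3) = ( 2*(n+1)*u(n+2) + 3*u(n+1) ) / ((n+2)*(n+1));
end
n = 0:N;
uex = ((-1).^n + 3.^n) ./ factorial(n);
max(abs(u - uex))                                        % ~ machine precision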



The nonhomogeneous case (11.28) can be tackled by searching for solutions of the form

u_n = u_n^{(0)} + u_n^{(φ)},

where u_n^{(0)} is the solution of the associated homogeneous equation and u_n^{(φ)} is a particular solution of the nonhomogeneous equation. Once the solution of the homogeneous equation is available, a general technique to obtain the solution of the nonhomogeneous equation is based on the method of variation of parameters, combined with a reduction of the order of the difference equation (see [BO78]).
In the special case of difference equations with constant coefficients, with φ_n of the form c^n Q(n), where c is a constant and Q is a polynomial of degree p with respect to the variable n, a possible approach is that of undetermined coefficients, where one looks for a particular solution that depends on some undetermined constants and has a known form for some classes of right sides φ_n. It suffices to look for a particular solution of the form

u_n^{(φ)} = c^n (b_p n^p + b_{p-1} n^{p-1} + \ldots + b_0),

where b_p, ..., b_0 are constants to be determined in such a way that u_n^{(φ)} is actually a solution of (11.28).

Example 11.7 Consider the difference equation u_{n+3} − u_{n+2} + u_{n+1} − u_n = 2^n n^2. The particular solution is of the form u_n = 2^n(b_2 n^2 + b_1 n + b_0). Substituting this solution into the equation, we find 5b_2 n^2 + (36b_2 + 5b_1)n + (58b_2 + 18b_1 + 5b_0) = n^2, from which, recalling the principle of identity for polynomials, one gets b_2 = 1/5, b_1 = −36/25 and b_0 = 358/125.

Analogously to the homogeneous case, it is possible to express the solution of (11.28) as

u_n = \sum_{j=0}^{k-1} u_j ψ_j^{(n)} + \sum_{l=k}^{n} φ_l ψ_{k-1}^{(n-l+k-1)},  n = 0, 1, ...   (11.42)

where we define ψ_{k-1}^{(i)} ≡ 0 for all i < 0 and φ_j ≡ 0 for all j < k.


11.5 Multistep Methods
Let us now introduce some examples of multistep methods (shortly, MS).

Definition 11.7 (q-step methods) A q-step method (q ≥ 1) is one which, ∀n ≥ q − 1, u_{n+1} depends on u_{n+1-q}, but not on the values u_k with k < n + 1 − q.

A well-known two-step explicit method can be obtained by using the centered finite difference (10.61) to approximate the first-order derivative in (11.1). This yields the midpoint method

u_{n+1} = u_{n-1} + 2hf_n,  n ≥ 1,   (11.43)

where u_0 = y_0, u_1 is to be determined and f_k denotes the value f(t_k, u_k).
An example of an implicit two-step scheme is the Simpson method, obtained from (11.2) with t_0 = t_{n-1} and t = t_{n+1} and by using the Cavalieri-Simpson quadrature rule to approximate the integral of f:

u_{n+1} = u_{n-1} + \frac{h}{3}[f_{n-1} + 4f_n + f_{n+1}],  n ≥ 1,   (11.44)

where u_0 = y_0 and u_1 is to be determined.
From these examples, it is clear that a multistep method requires q initial values u_0, ..., u_{q-1} for "taking off". Since the Cauchy problem provides only one datum (u_0), one way to assign the remaining values consists of resorting to explicit one-step methods of high order. An example is given by Heun's method (11.10); other examples are provided by the Runge-Kutta methods, which will be introduced in Section 11.8.
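Before turning to the general theory, here is a minimal MATLAB sketch (our own script, with an arbitrary right-hand side) of the midpoint method (11.43), where the starting value u_1 is supplied by one step of Heun's method (11.10).

% Midpoint method (11.43) with u1 computed by one step of Heun's method.
% Illustrative sketch; f and the data are arbitrary choices.
f  = @(t,y) -y + sin(t);  y0 = 1;  t0 = 0;  T = 5;  h = 0.05;
t  = t0:h:t0+T;  N = length(t);  u = zeros(1, N);  u(1) = y0;
f1 = f(t(1), u(1));
u(2) = u(1) + h/2*( f1 + f(t(2), u(1) + h*f1) );     % starting value via Heun
for n = 2:N-1
  u(n+1) = u(n-1) + 2*h*f(t(n), u(n));               % midpoint step (11.43)
end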

In this section we deal with linear multistep methods

u_{n+1} = \sum_{j=0}^{p} a_j u_{n-j} + h\sum_{j=0}^{p} b_j f_{n-j} + hb_{-1} f_{n+1},  n = p, p + 1, ...   (11.45)

which are (p + 1)-step methods, p ≥ 0. For p = 0, we recover one-step methods.
The coefficients a_j, b_j are real and fully identify the scheme; they are such that a_p ≠ 0 or b_p ≠ 0. If b_{-1} ≠ 0 the scheme is implicit, otherwise it is explicit.
We can reformulate (11.45) as follows

\sum_{s=0}^{p+1} α_s u_{n+s} = h\sum_{s=0}^{p+1} β_s f(t_{n+s}, u_{n+s}),  n = 0, 1, ..., N_h − (p + 1),   (11.46)

having set α_{p+1} = 1, α_s = −a_{p-s} for s = 0, ..., p and β_s = b_{p-s} for s = 0, ..., p + 1. Relation (11.46) is a special instance of the linear difference equation (11.28), where we set k = p + 1 and φ_{n+j} = hβ_j f(t_{n+j}, u_{n+j}), for j = 0, ..., p + 1.
Also for MS methods we can characterize consistency in terms of the local truncation error, according to the following definition.

Definition 11.8 The local truncation error (LTE) τ_{n+1}(h) introduced by the multistep method (11.45) at t_{n+1} (for n ≥ p) is defined through the following relation

hτ_{n+1}(h) = y_{n+1} − \left[ \sum_{j=0}^{p} a_j y_{n-j} + h\sum_{j=-1}^{p} b_j y'_{n-j} \right],  n ≥ p,   (11.47)

where y_{n-j} = y(t_{n-j}) and y'_{n-j} = y'(t_{n-j}) for j = −1, ..., p.

Analogously to one-step methods, the quantity hτ_{n+1}(h) is the residual generated at t_{n+1} if we pretend that the exact solution "satisfies" the numerical scheme. Letting τ(h) = max_n |τ_n(h)|, we have the following definition.
