
Definition 11.9 (Consistency) The multistep method (11.45) is consistent if τ(h) → 0 as h → 0. Moreover, if τ(h) = O(h^q) for some q ≥ 1, then the method is said to have order q.

A more precise characterization of the LTE can be given by introducing the following linear operator L associated with the linear MS method (11.45)

L[w(t); h] = w(t + h) − Σ_{j=0}^{p} a_j w(t − jh) − h Σ_{j=−1}^{p} b_j w'(t − jh),    (11.48)

where w ∈ C^1(I) is an arbitrary function. Notice that the LTE is exactly L[y(t_n); h]. If we assume that w is sufficiently smooth and expand w(t − jh) and w'(t − jh) about t − ph, we obtain

L[w(t); h] = C_0 w(t − ph) + C_1 h w^(1)(t − ph) + ... + C_k h^k w^(k)(t − ph) + ...

Consequently, if the MS method has order q and y ∈ C^{q+1}(I), we obtain

τ_{n+1}(h) = C_{q+1} h^{q+1} y^{(q+1)}(t_{n−p}) + O(h^{q+2}).

The term C_{q+1} h^{q+1} y^{(q+1)}(t_{n−p}) is the so-called principal local truncation error (PLTE), while C_{q+1} is the error constant. The PLTE is widely employed in devising adaptive strategies for MS methods (see [Lam91], Chapter 3).

Program 92 provides an implementation of the multistep method in the form (11.45) for the solution of a Cauchy problem on the interval (t0, T). The input parameters are: the column vector a containing the p + 1 coefficients a_i; the column vector b containing the p + 2 coefficients b_i; the discretization stepsize h; the vector u0 of initial data at the corresponding time instants t0; the macros fun and dfun containing the functions f and ∂f/∂y. If the MS method is implicit, a tolerance tol and a maximum number of admissible iterations itmax must be provided. These two parameters monitor the convergence of Newton's method, which is employed to solve the nonlinear equation (11.45) associated with the MS method. In output the code returns the vectors u and t containing the computed solution at the time instants t.

Program 92 - multistep : Linear multistep methods
function [t,u] = multistep(a,b,tf,t0,u0,h,fun,dfun,tol,itmax)
% y and t must be in scope, since fun and dfun are evaluated via eval
y = u0; t = t0; f = eval(fun); p = length(a) - 1; u = u0;
nt = fix((tf - t0(1))/h);
for k = 1:nt
  lu = length(u);
  % known part of the scheme: sum of a_j u_{n-j} plus the explicit b_j f_{n-j}
  G = a'*u(lu:-1:lu-p) + h*b(2:p+2)'*f(lu:-1:lu-p);
  lt = length(t0); t0 = [t0; t0(lt)+h]; unew = u(lu);
  t = t0(lt+1); err = tol + 1; it = 0;
  % Newton iterations for the nonlinear equation of an implicit method
  while (err > tol) & (it <= itmax)
    y = unew; den = 1 - h*b(1)*eval(dfun);
    fnew = eval(fun);
    if den == 0
      it = itmax + 1;
    else
      it = it + 1;
      unew = unew - (unew - G - h*b(1)*fnew)/den;
      err = abs(unew - y);
    end
  end
  u = [u; unew]; f = [f; fnew];
end
t = t0;
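A minimal usage sketch, assuming the listing above is saved as multistep.m: the two-step Adams-Bashforth method (a = [1; 0], b = [0; 3/2; −1/2], so that b_{−1} = 0 and the Newton loop is inactive) applied to y' = −y, y(0) = 1, whose exact solution is e^{−t}; the second starting value is borrowed from the exact solution.

% two-step AB applied to y' = -y on (0,1); exact solution exp(-t)
a = [1; 0]; b = [0; 3/2; -1/2]; h = 0.1;
t0 = [0; h]; u0 = [1; exp(-h)];
[t,u] = multistep(a,b,1,t0,u0,h,'-y','-1',1e-8,100);
max(abs(u - exp(-t)))    % global error, O(h^2) for this scheme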
In the forthcoming sections we examine some families of multistep methods.


11.5.1 Adams Methods
These methods are derived from the integral form (11.2) through an approximate evaluation of the integral of f between t_n and t_{n+1}. We suppose that the discretization nodes are equally spaced, i.e., t_j = t_0 + jh, with h > 0 and j ≥ 1, and then we integrate, instead of f, its interpolating polynomial on p + 1 distinct nodes. The resulting schemes are thus consistent by construction and have the following form

u_{n+1} = u_n + h Σ_{j=−1}^{p} b_j f_{n−j},    n ≥ p.    (11.49)

The interpolation nodes can be either:
1. t_n, t_{n−1}, ..., t_{n−p} (in this case b_{−1} = 0 and the resulting method is explicit); or
2. t_{n+1}, t_n, ..., t_{n−p+1} (in this case b_{−1} ≠ 0 and the scheme is implicit).

The implicit schemes are called Adams-Moulton methods, while the explicit ones are called Adams-Bashforth methods.

Adams-Bashforth methods (AB)

Taking p = 0 we recover the forward Euler method, since the interpolating polynomial of degree zero at the node t_n is Π_0 f = f_n. For p = 1, the linear interpolating polynomial at the nodes t_{n−1} and t_n is

Π_1 f(t) = f_n + (t − t_n) (f_{n−1} − f_n)/(t_{n−1} − t_n).

Since Π_1 f(t_n) = f_n and Π_1 f(t_{n+1}) = 2f_n − f_{n−1}, we get

∫_{t_n}^{t_{n+1}} Π_1 f(t) dt = (h/2) [Π_1 f(t_n) + Π_1 f(t_{n+1})] = (h/2) [3f_n − f_{n−1}].

Therefore, the two-step AB method is

u_{n+1} = u_n + (h/2) [3f_n − f_{n−1}].    (11.50)
With a similar procedure, if p = 2, we find the three-step AB method

u_{n+1} = u_n + (h/12) [23f_n − 16f_{n−1} + 5f_{n−2}],

while for p = 3 we get the four-step AB scheme

u_{n+1} = u_n + (h/24) (55f_n − 59f_{n−1} + 37f_{n−2} − 9f_{n−3}).

In general, q-step Adams-Bashforth methods have order q. The error constants C_{q+1} of these methods are collected in Table 11.1.
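As a minimal sketch (using only core MATLAB functions), the weights above can be recovered by integrating the Lagrange characteristic polynomials over one step, as in the construction just described; the nodes are rescaled so that t_n = 0 and h = 1.

% Adams-Bashforth weights: b_j = integral over [0,1] of ell_j,
% where ell_j is the Lagrange basis on the nodes 0, -1, ..., -p
p = 3;                           % four-step scheme
x = 0:-1:-p;
b = zeros(1,p+1);
for j = 1:p+1
  e = zeros(1,p+1); e(j) = 1;    % data selecting the j-th basis polynomial
  C = polyint(polyfit(x,e,p));   % antiderivative of ell_j
  b(j) = polyval(C,1) - polyval(C,0);
end
disp(b*24)                       % expect [55 -59 37 -9]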

Adams-Moulton methods (AM)

If p = −1, the backward Euler scheme is recovered, while if p = 0 we construct the linear polynomial interpolating f at the nodes t_n and t_{n+1} and recover the Crank-Nicolson scheme (11.9). In the case of the two-step method, the polynomial of degree 2 interpolating f at the nodes t_{n−1}, t_n, t_{n+1} is generated, yielding the following scheme

u_{n+1} = u_n + (h/12) [5f_{n+1} + 8f_n − f_{n−1}].    (11.51)

The methods corresponding to p = 3 and p = 4 are respectively given by

u_{n+1} = u_n + (h/24) (9f_{n+1} + 19f_n − 5f_{n−1} + f_{n−2}),

u_{n+1} = u_n + (h/720) (251f_{n+1} + 646f_n − 264f_{n−1} + 106f_{n−2} − 19f_{n−3}).
The q-step Adams-Moulton methods have order q + 1 and their error constants C*_{q+1} are summarized in Table 11.1.

q    C_{q+1}    C*_{q+1}        q    C_{q+1}     C*_{q+1}
1    1/2        −1/2            3    3/8         −1/24
2    5/12       −1/12           4    251/720     −19/720

TABLE 11.1. Error constants for Adams-Bashforth methods (having order q) and Adams-Moulton methods (having order q + 1)



11.5.2 BDF Methods
The so-called backward differentiation formulae (henceforth denoted by BDF) are implicit MS methods derived from a complementary approach to the one followed for the Adams methods. For the Adams methods we resorted to numerical integration of the source function f, whereas in BDF methods we directly approximate the value of the first derivative of y at the node t_{n+1} through the first derivative of the polynomial interpolating y at the p + 2 nodes t_{n+1}, t_n, ..., t_{n−p}.
By doing so, we get schemes of the form

u_{n+1} = Σ_{j=0}^{p} a_j u_{n−j} + h b_{−1} f_{n+1}

with b_{−1} ≠ 0. Method (11.8) represents the most elementary example, corresponding to the coefficients a_0 = 1 and b_{−1} = 1.
We summarize in Table 11.2 the coefficients of the BDF methods that are zero-stable. In fact, we shall see in Section 11.6.3 that only for p ≤ 5 are BDF methods zero-stable (see [Cry73]).
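A minimal sketch (core MATLAB only) that reproduces the rows of Table 11.2 below by differentiating the interpolating polynomial at t_{n+1}, with the nodes rescaled so that t_{n+1} = 1 and h = 1; shown for the two-step scheme.

% BDF coefficients: impose Pi'(t_{n+1}) = f_{n+1} for the polynomial
% interpolating u at the nodes 1, 0, -1, ..., -p
p = 1;                           % two-step BDF scheme
x = 1:-1:-p;  m = length(x);     % the p + 2 nodes
d = zeros(1,m);
for j = 1:m
  e = zeros(1,m); e(j) = 1;
  d(j) = polyval(polyder(polyfit(x,e,m-1)), 1);   % ell_j'(1)
end
a   = -d(2:end)/d(1)             % expect [4/3 -1/3]
bm1 = 1/d(1)                     % expect 2/3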


p    a_0        a_1         a_2        a_3        a_4       a_5       b_{−1}
0    1          0           0          0          0         0         1
1    4/3       −1/3         0          0          0         0         2/3
2    18/11     −9/11        2/11       0          0         0         6/11
3    48/25     −36/25       16/25     −3/25       0         0         12/25
4    300/137   −300/137     200/137   −75/137     12/137    0         60/137
5    360/147   −450/147     400/147   −225/147    72/147   −10/147    60/147

TABLE 11.2. Coefficients of zero-stable BDF methods for p = 0, 1, ..., 5


11.6 Analysis of Multistep Methods

Analogous to what has been done for one-step methods, in this section we provide algebraic conditions that ensure consistency and stability of multistep methods.

11.6.1 Consistency
The property of consistency of a multistep method introduced in Definition 11.9 can be verified by checking that the coefficients satisfy certain algebraic equations, as stated in the following theorem.

Theorem 11.3 The multistep method (11.45) is consistent iff the following algebraic relations among the coefficients are satisfied

Σ_{j=0}^{p} a_j = 1,    −Σ_{j=0}^{p} j a_j + Σ_{j=−1}^{p} b_j = 1.    (11.52)

Moreover, if y ∈ C^{q+1}(I) for some q ≥ 1, where y is the solution of the Cauchy problem (11.1), then the method is of order q iff (11.52) holds and the following additional conditions are satisfied

Σ_{j=0}^{p} (−j)^i a_j + i Σ_{j=−1}^{p} (−j)^{i−1} b_j = 1,    i = 2, ..., q.

Proof. Expanding y and f in a Taylor series yields, for any n ≥ p,

y_{n−j} = y_n − jh y'_n + O(h^2),    f_{n−j} = f_n + O(h).    (11.53)

Plugging these values back into the multistep scheme and neglecting the terms in h of order higher than 1 gives

y_{n+1} − Σ_{j=0}^{p} a_j y_{n−j} − h Σ_{j=−1}^{p} b_j f_{n−j}
= y_{n+1} − Σ_{j=0}^{p} a_j y_n + h Σ_{j=0}^{p} j a_j y'_n − h Σ_{j=−1}^{p} b_j f_n − O(h^2) [Σ_{j=0}^{p} a_j − Σ_{j=−1}^{p} b_j]
= y_{n+1} − Σ_{j=0}^{p} a_j y_n − h y'_n [−Σ_{j=0}^{p} j a_j + Σ_{j=−1}^{p} b_j] − O(h^2) [Σ_{j=0}^{p} a_j − Σ_{j=−1}^{p} b_j],

where we have replaced y'_n by f_n. From the definition (11.47) we then obtain

h τ_{n+1}(h) = y_{n+1} − Σ_{j=0}^{p} a_j y_n − h y'_n [−Σ_{j=0}^{p} j a_j + Σ_{j=−1}^{p} b_j] − O(h^2) [Σ_{j=0}^{p} a_j − Σ_{j=−1}^{p} b_j],

hence the local truncation error is

τ_{n+1}(h) = (y_{n+1} − y_n)/h + (y_n/h) [1 − Σ_{j=0}^{p} a_j]
+ y'_n [Σ_{j=0}^{p} j a_j − Σ_{j=−1}^{p} b_j] − O(h) [Σ_{j=0}^{p} a_j − Σ_{j=−1}^{p} b_j].

Since, for any n, (y_{n+1} − y_n)/h → y'_n as h → 0, it follows that τ_{n+1}(h) tends to 0 as h goes to 0 iff the algebraic conditions (11.52) are satisfied. The rest of the proof can be carried out in a similar manner, accounting for terms of progressively higher order in the expansions (11.53). □
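A minimal numerical check of these conditions (core MATLAB only) for the three-step Adams-Bashforth scheme, which has order 3:

% consistency and order conditions of Theorem 11.3 for AB3
p = 2; a = [1 0 0]; b = [0 23 -16 5]/12;            % b(1) is b_{-1}
ja = 0:p; jb = -1:p;
disp(sum(a))                                        % first condition: 1
disp(-sum(ja.*a) + sum(b))                          % second condition: 1
for i = 2:3
  disp(sum((-ja).^i.*a) + i*sum((-jb).^(i-1).*b))   % order conditions: 1
end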


11.6.2 The Root Conditions
Let us employ the multistep method (11.45) to approximately solve the model problem (11.24). The numerical solution satisfies the linear difference equation

u_{n+1} = Σ_{j=0}^{p} a_j u_{n−j} + hλ Σ_{j=−1}^{p} b_j u_{n−j},    (11.54)

which fits the form (11.29). We can therefore apply the theory developed in Section 11.4 and look for fundamental solutions of the form u_k = [r_i(hλ)]^k, k = 0, 1, ..., where the r_i(hλ), for i = 0, ..., p, are the roots of the polynomial Π ∈ P_{p+1}

Π(r) = ρ(r) − hλ σ(r).    (11.55)

We have denoted by

ρ(r) = r^{p+1} − Σ_{j=0}^{p} a_j r^{p−j},    σ(r) = b_{−1} r^{p+1} + Σ_{j=0}^{p} b_j r^{p−j}

the first and second characteristic polynomials of the multistep method (11.45), respectively. The polynomial Π(r) is the characteristic polynomial associated with the difference equation (11.54), and the r_j(hλ) are its characteristic roots.
The roots of ρ are r_i(0), i = 0, ..., p, and will be abbreviated henceforth by r_i. From the first condition in (11.52) it follows that if a multistep method is consistent then 1 is a root of ρ. We shall assume that such a root (the consistency root) is labelled as r_0(0) = r_0 and call the corresponding root r_0(hλ) the principal root.

Definition 11.10 (Root condition) The multistep method (11.45) is said to satisfy the root condition if all the roots r_i are contained within the unit disk centered at the origin of the complex plane, and any that fall on its boundary are simple roots of ρ. Equivalently,

|r_j| ≤ 1,  j = 0, ..., p;  furthermore, ρ'(r_j) ≠ 0 for those j such that |r_j| = 1.    (11.56)



Definition 11.11 (Strong root condition) The multistep method (11.45) is said to satisfy the strong root condition if it satisfies the root condition and r_0 = 1 is the only root lying on the boundary of the unit disk. Equivalently,

|r_j| < 1,  j = 1, ..., p.    (11.57)




Definition 11.12 (Absolute root condition) The multistep method (11.45) satisfies the absolute root condition if there exists h_0 > 0 such that

|r_j(hλ)| < 1,  j = 0, ..., p,  ∀h ≤ h_0.
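The root conditions lend themselves to a direct numerical test. A minimal sketch checking the root condition for the six-step BDF scheme of Table 11.2 through the zeros of its first characteristic polynomial:

% rho(r) = r^(p+1) - sum_j a_j r^(p-j) for the six-step BDF method
a = [360 -450 400 -225 72 -10]/147;
r = roots([1, -a]);
disp(abs(r).')    % all moduli <= 1; r = 1 is the (simple) consistency root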




11.6.3 Stability and Convergence Analysis for Multistep Methods

Let us now examine the relation between root conditions and the stability of multistep methods. Generalizing Definition 11.4, we can give the following.

Definition 11.13 (Zero-stability of multistep methods) The multistep method (11.45) is zero-stable if

∃h_0 > 0, ∃C > 0 : ∀h ∈ (0, h_0],  |z_n^{(h)} − u_n^{(h)}| ≤ Cε,  0 ≤ n ≤ N_h,    (11.58)

where N_h = max{n : t_n ≤ t_0 + T}, and z_n^{(h)} and u_n^{(h)} are, respectively, the solutions of the problems

z_{n+1}^{(h)} = Σ_{j=0}^{p} a_j z_{n−j}^{(h)} + h Σ_{j=−1}^{p} b_j f(t_{n−j}, z_{n−j}^{(h)}) + h δ_{n+1},
z_k^{(h)} = w_k + δ_k,  k = 0, ..., p,    (11.59)

u_{n+1}^{(h)} = Σ_{j=0}^{p} a_j u_{n−j}^{(h)} + h Σ_{j=−1}^{p} b_j f(t_{n−j}, u_{n−j}^{(h)}),
u_k^{(h)} = w_k,  k = 0, ..., p,    (11.60)

for p ≤ n ≤ N_h − 1, where |δ_k| ≤ ε, 0 ≤ k ≤ N_h, w_0 = y_0, and w_k, k = 1, ..., p, are p initial values generated by using another numerical scheme.

Theorem 11.4 (Equivalence of zero-stability and root condition) For a consistent multistep method, the root condition is equivalent to zero-stability.

Proof. Let us begin by proving that the root condition is necessary for zero-stability to hold. We proceed by contradiction and assume that the method is zero-stable and that there exists a root r_i which violates the root condition.
Since the method is zero-stable, condition (11.58) must be satisfied for any Cauchy problem, in particular for the problem y'(t) = 0 with y(0) = 0, whose solution is, clearly, the null function. Similarly, the solution u_n^{(h)} of (11.60) with f = 0 and w_k = 0 for k = 0, ..., p is identically zero.
Consider first the case |r_i| > 1. Then, define

δ_n = ε r_i^n  if r_i ∈ R,    δ_n = ε (r_i + r̄_i)^n  if r_i ∈ C,

for ε > 0. It is simple to check that the sequence z_n^{(h)} = δ_n for n = 0, 1, ... is a solution of (11.59) with initial conditions z_k^{(h)} = δ_k and that |δ_k| ≤ ε for k = 0, 1, ..., p. Let us now choose t̄ in (t_0, t_0 + T) and let x_n be the grid node nearest to t̄. Clearly, n is the integral part of t̄/h, and |z_n^{(h)}| = |u_n^{(h)} − z_n^{(h)}| → ∞ as h → 0. This proves that |u_n^{(h)} − z_n^{(h)}| cannot be uniformly bounded with respect to h as h → 0, which contradicts the assumption that the method is zero-stable.
A similar proof can be carried out if |r_i| = 1 but has multiplicity greater than 1, provided that one takes into account the form of the solution (11.33).

Let us now prove that the root condition is sufficient for method (11.45) to be zero-stable. Recalling (11.46) and denoting by z_{n+j}^{(h)} and u_{n+j}^{(h)} the solutions to (11.59) and (11.60), respectively, for j ≥ 1, it turns out that the function w_{n+j}^{(h)} = z_{n+j}^{(h)} − u_{n+j}^{(h)} satisfies the following difference equation

Σ_{j=0}^{p+1} α_j w_{n+j}^{(h)} = φ_{n+p+1},  n = 0, ..., N_h − (p + 1),    (11.61)

having set

φ_{n+p+1} = h Σ_{j=0}^{p+1} β_j [f(t_{n+j}, z_{n+j}^{(h)}) − f(t_{n+j}, u_{n+j}^{(h)})] + h δ_{n+p+1}.    (11.62)

Denote by {ψ_j^{(n)}} the sequences of fundamental solutions of the homogeneous equation associated with (11.61). Recalling (11.42), the general solution of (11.61) is given by

w_n^{(h)} = Σ_{j=0}^{p} w_j^{(h)} ψ_j^{(n)} + Σ_{l=p+1}^{n} ψ_p^{(n−l+p)} φ_l,  n = p + 1, ...


The following result expresses the connection between the root condition and the boundedness of the solutions of a difference equation (for the proof, see [Gau97], Theorem 6.3.2).

Lemma 11.3 There exists a constant M > 0 such that, for any solution {u_n} of the difference equation (11.28),

|u_n| ≤ M [ max_{j=0,...,k−1} |u_j| + Σ_{l=k}^{n} |φ_l| ],  n = 0, 1, ...    (11.63)

iff the root condition is satisfied for the polynomial (11.30), i.e., (11.56) holds for the zeros of the polynomial (11.30).
Let us now recall that, for any j, {ψ_j^{(n)}} is the solution of a homogeneous difference equation whose initial data are ψ_j^{(i)} = δ_{ij}, i, j = 0, ..., p. On the other hand, for any l, ψ_p^{(n−l+p)} is the solution of a difference equation with zero initial conditions and right-hand sides equal to zero, except for the one corresponding to n = l, which is ψ_p^{(p)} = 1.
Therefore, Lemma 11.3 can be applied in both cases, so we can conclude that there exists a constant M > 0 such that |ψ_j^{(n)}| ≤ M and |ψ_p^{(n−l+p)}| ≤ M, uniformly with respect to n and l. The following estimate thus holds

|w_n^{(h)}| ≤ M { (p+1) max_{j=0,...,p} |w_j^{(h)}| + Σ_{l=p+1}^{n} |φ_l| },  n = 0, 1, ..., N_h.    (11.64)


If L denotes the Lipschitz constant of f, from (11.62) we get

|φ_{n+p+1}| ≤ h L Σ_{j=0}^{p+1} |β_j| max_{j=0,...,p+1} |w_{n+j}^{(h)}| + h |δ_{n+p+1}|.

Let β = max_{j=0,...,p+1} |β_j| and Δ_{[q,r]} = max_{j=q,...,r} |δ_j|, q and r being some integers with q ≤ r. From (11.64), the following estimate is therefore obtained

|w_n^{(h)}| ≤ M { (p+1) Δ_{[0,p]} + hβL Σ_{l=p+1}^{n} Σ_{j=0}^{p+1} |w_{l−p−1+j}^{(h)}| + N_h h Δ_{[p+1,n]} }

≤ M { (p+1) Δ_{[0,p]} + hβL(p+2) Σ_{m=0}^{n} |w_m^{(h)}| + T Δ_{[p+1,n]} }.

Let Q = 2(p+2)βLM and h_0 = 1/Q, so that 1 − hQ/2 ≥ 1/2 if h ≤ h_0. Then

(1/2) |w_n^{(h)}| ≤ (1 − hQ/2) |w_n^{(h)}|
≤ M { (p+1) Δ_{[0,p]} + hβL(p+2) Σ_{m=0}^{n−1} |w_m^{(h)}| + T Δ_{[p+1,n]} }.

Letting R = 2M [(p+1) Δ_{[0,p]} + T Δ_{[p+1,n]}], we finally obtain

|w_n^{(h)}| ≤ hQ Σ_{m=0}^{n−1} |w_m^{(h)}| + R.

Applying Lemma 11.2 with the identifications φ_n = |w_n^{(h)}|, g_0 = R, p_s = 0 and k_s = hQ for any s = 0, ..., n − 1, yields

|w_n^{(h)}| ≤ 2M e^{TQ} [(p+1) Δ_{[0,p]} + T Δ_{[p+1,n]}].    (11.65)

Method (11.45) is thus zero-stable for any h ≤ h_0. □

Theorem 11.4 allows us to characterize the stability behavior of several families of discretization methods.
In the special case of consistent one-step methods, the polynomial ρ admits only the root r_0 = 1. They thus automatically satisfy the root condition and are zero-stable.
For the Adams methods (11.49), the polynomial ρ is always of the form ρ(r) = r^{p+1} − r^p. Thus, its roots are r_0 = 1 and r_1 = 0 (with multiplicity p), so that all Adams methods are zero-stable.
Also the midpoint method (11.43) and Simpson's method (11.44) are zero-stable: for both of them, the first characteristic polynomial is ρ(r) = r^2 − 1, so that r_0 = 1 and r_1 = −1.
Finally, the BDF methods of Section 11.5.2 are zero-stable provided that p ≤ 5, since in such a case the root condition is satisfied (see [Cry73]).

We are now in a position to give the following convergence result.

Theorem 11.5 (Convergence) A consistent multistep method is convergent iff it satisfies the root condition and the error on the initial data tends to zero as h → 0. Moreover, the method converges with order q if it has order q and the error on the initial data tends to zero as O(h^q).

Proof. Suppose that the MS method is consistent and convergent. To prove that the root condition is satisfied, we refer to the problem y'(t) = 0 with y(0) = 0 on the interval I = (0, T). Convergence means that the numerical solution {u_n} must tend to the exact solution y(t) = 0 for any converging set of initial data u_k, k = 0, ..., p, i.e. max_{k=0,...,p} |u_k| → 0 as h → 0. From this observation, the proof follows by contradiction along the same lines as the proof of Theorem 11.4, where the parameter ε is now replaced by h.
Let us now prove that consistency, together with the root condition, implies convergence, under the assumption that the error on the initial data tends to zero as h → 0. We can apply Theorem 11.4, setting u_n^{(h)} = u_n (the approximate solution of the Cauchy problem) and z_n^{(h)} = y_n (the exact solution), so that from (11.47) it turns out that δ_m = τ_m(h). Then, due to (11.65), for any n ≥ p + 1 we obtain

|u_n − y_n| ≤ 2M e^{TQ} [(p+1) max_{j=0,...,p} |u_j − y_j| + T max_{j=p+1,...,n} |τ_j(h)|].

Convergence holds by noticing that the right-hand side of this inequality tends to zero as h → 0. □

A remarkable consequence of the above theorem is the following Lax-Richtmyer-type equivalence theorem.

Corollary 11.1 (Equivalence theorem) A consistent multistep method is convergent iff it is zero-stable and the error on the initial data tends to zero as h tends to zero.

We conclude this section with the following result, which establishes an upper limit for the order of multistep methods (see [Dah63]).

Property 11.1 (First Dahlquist barrier) There is no zero-stable, p-step linear multistep method with order greater than p + 1 if p is odd, or greater than p + 2 if p is even.


11.6.4 Absolute Stability of Multistep Methods
Consider again the difference equation (11.54), which was obtained by applying the MS method (11.45) to the model problem (11.24). According to (11.33), its solution takes the form

u_n = Σ_{j=1}^{k'} [ Σ_{s=0}^{m_j − 1} γ_{sj} n^s ] [r_j(hλ)]^n,  n = 0, 1, ...,

where the r_j(hλ), j = 1, ..., k', are the distinct roots of the characteristic polynomial (11.55), and m_j denotes the multiplicity of r_j(hλ).
In view of (11.25), it is clear that the absolute root condition introduced in Definition 11.12 is necessary and sufficient to ensure that the multistep method (11.45) is absolutely stable for h ≤ h_0.
Among the methods enjoying the absolute stability property, preference should go to those for which the region of absolute stability A, introduced in (11.26), is as wide as possible or even unbounded. Among these are the A-stable methods introduced at the end of Section 11.3.3 and the ϑ-stable methods, for which A contains the angular region of the z ∈ C such that −ϑ < π − arg(z) < ϑ, with ϑ ∈ (0, π/2). A-stable methods are of remarkable importance when solving stiff problems (see Section 11.10).


[FIGURE 11.4. Regions of absolute stability for A-stable (left) and ϑ-stable (right) methods]

The following result, whose proof is given in [Wid67], establishes a relation between the order of a multistep method, the number of its steps and its stability properties.

Property 11.2 (Second Dahlquist barrier) A linear explicit multistep method can be neither A-stable nor ϑ-stable. Moreover, there is no A-stable linear multistep method with order greater than 2. Finally, for any ϑ ∈ (0, π/2), ϑ-stable p-step linear multistep methods of order p exist only for p = 3 and p = 4.

Let us now examine the regions of absolute stability of several MS methods.
The regions of absolute stability of both explicit and implicit Adams schemes shrink progressively as the order of the method increases. In Figure 11.5 (left) we show the regions of absolute stability for the AB methods examined in Section 11.5.1, with the exception of the forward Euler method, whose region is shown in Figure 11.3.
The regions of absolute stability of the Adams-Moulton schemes, except for the Crank-Nicolson method, which is A-stable, are represented in Figure 11.5 (right).
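The contours in Figures 11.5 and 11.6 can be reproduced with the classical boundary locus technique (not detailed here): as r = e^{iθ} sweeps the unit circle, hλ = ρ(r)/σ(r) traces the boundary of A. A minimal sketch for the AB3 method:

% boundary locus of the three-step Adams-Bashforth method
theta = linspace(0, 2*pi, 400); r = exp(1i*theta);
rho   = r.^3 - r.^2;                 % first characteristic polynomial
sigma = (23*r.^2 - 16*r + 5)/12;     % second characteristic polynomial
hl = rho./sigma;                     % points on the boundary of A
plot(real(hl), imag(hl)), axis equal, grid on
xlabel('Re(h\lambda)'), ylabel('Im(h\lambda)')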

[FIGURE 11.5. Outer contours of the regions of absolute stability for Adams-Bashforth methods (left), ranging from second to fourth order (AB2, AB3 and AB4), and for Adams-Moulton methods (right), from third to fifth order (AM3, AM4 and AM5). Notice that the region of the AB3 method extends into the half-plane with positive real part. The region for the explicit Euler (AB1) method was drawn in Figure 11.3]

In Figure 11.6 the regions of absolute stability of some of the BDF methods introduced in Section 11.5.2 are drawn. They are unbounded and always contain the negative real numbers. These stability features make BDF methods quite attractive for solving stiff problems (see Section 11.10).


[FIGURE 11.6. Inner contours of the regions of absolute stability for the three-step (BDF3), five-step (BDF5) and six-step (BDF6) BDF methods. Unlike Adams methods, these regions are unbounded and extend outside the limited portion shown in the figure]


Remark 11.3 Some authors (see, e.g., [BD74]) adopt an alternative definition of absolute stability, replacing (11.25) with the milder property

∃C > 0 : |u_n| ≤ C, as t_n → +∞.

According to this new definition, the absolute stability of a numerical method should be regarded as the counterpart of the asymptotic stability (11.6) of the Cauchy problem. The new region of absolute stability A* would be

A* = {z ∈ C : ∃C > 0, |u_n| ≤ C, ∀n ≥ 0},

and it would not necessarily coincide with A. For example, in the case of the midpoint method A is empty (thus, it is unconditionally absolutely unstable), while A* = {z = αi, α ∈ [−1, 1]}.
In general, if A is nonempty, then A* is its closure. We notice that zero-stable methods are those for which the region A* contains the origin z = 0 of the complex plane.


To conclude, let us notice that the strong root condition (11.57) implies, for a linear problem, that

∀h ≤ h_0, ∃C > 0 : |u_n| ≤ C(|u_0| + ... + |u_p|), ∀n ≥ p + 1.    (11.66)

We say that a method is relatively stable if it satisfies (11.66). Clearly, (11.66) implies zero-stability, but the converse does not hold.
Figure 11.7 summarizes the main conclusions drawn in this section about stability, convergence and root conditions, in the particular case of a consistent method applied to the model problem (11.24).


Root condition   ⇐=   Strong root condition   ⇐=   Absolute root condition
      ⇕                        ⇓                            ⇓
Convergence  ⇐⇒  Zero-stability   ⇐=   (11.66)   ⇐=   Absolute stability

FIGURE 11.7. Relations between root conditions, stability and convergence for a consistent method applied to the model problem (11.24)




11.7 Predictor-Corrector Methods
When solving a nonlinear Cauchy problem of the form (11.1), at each time step implicit schemes require dealing with a nonlinear equation. For instance, if the Crank-Nicolson method is used, we get the nonlinear equation

u_{n+1} = u_n + (h/2) [f_n + f_{n+1}] = Ψ(u_{n+1}),

which can be cast in the form Φ(u_{n+1}) = 0, where Φ(u_{n+1}) = u_{n+1} − Ψ(u_{n+1}). To solve this equation, Newton's method would give

u_{n+1}^{(k+1)} = u_{n+1}^{(k)} − Φ(u_{n+1}^{(k)})/Φ'(u_{n+1}^{(k)}),

for k = 0, 1, ..., until convergence, and requires an initial datum u_{n+1}^{(0)} sufficiently close to u_{n+1}. Alternatively, one can resort to fixed-point iterations

u_{n+1}^{(k+1)} = Ψ(u_{n+1}^{(k)})    (11.67)

for k = 0, 1, ..., until convergence. In such a case, the global convergence condition for the fixed-point method (see Theorem 6.1) sets a constraint on the discretization stepsize of the form

h < 1/(|b_{−1}| L),    (11.68)
where L is the Lipschitz constant of f with respect to y. In practice, except for the case of stiff problems (see Section 11.10), this restriction on h is not significant, since considerations of accuracy put a much more restrictive constraint on h. However, each iteration of (11.67) requires one evaluation of the function f, and the computational cost can be reduced by providing a good initial guess u_{n+1}^{(0)}. This can be done by taking one step of an explicit MS method and then iterating on (11.67) for a fixed number m of iterations. By doing so, the implicit MS method that is employed in the fixed-point scheme "corrects" the value of u_{n+1} "predicted" by the explicit MS method. A procedure of this sort is called a predictor-corrector method, or PC method. There are many ways in which a predictor-corrector method can be implemented.
In its basic version, the value u_{n+1}^{(0)} is computed by an explicit (p̃ + 1)-step method, called the predictor (here identified by the coefficients {ã_j, b̃_j})

[P]  u_{n+1}^{(0)} = Σ_{j=0}^{p̃} ã_j u_{n−j}^{(1)} + h Σ_{j=0}^{p̃} b̃_j f_{n−j}^{(0)},

where f_k^{(0)} = f(t_k, u_k^{(0)}) and the u_k^{(1)} are the solutions computed by the PC method at the previous steps or are the initial conditions. Then we evaluate the function f at the new point (t_{n+1}, u_{n+1}^{(0)}) (evaluation step)

[E]  f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),

and, finally, a single fixed-point iteration is carried out using an implicit MS scheme of the form (11.45)

[C]  u_{n+1}^{(1)} = Σ_{j=0}^{p} a_j u_{n−j}^{(1)} + h b_{−1} f_{n+1}^{(0)} + h Σ_{j=0}^{p} b_j f_{n−j}^{(0)}.

This second step of the procedure, which is actually explicit, is called the corrector. The overall procedure is denoted shortly as the PEC or P(EC)^1 method, in which P and C denote one application at time t_{n+1} of the predictor and the corrector methods, respectively, while E indicates one evaluation of the function f.
The strategy above can be generalized by performing m > 1 corrector iterations at each step t_{n+1}. The corresponding methods are called predictor-multicorrector schemes and compute u_{n+1}^{(0)} at time step t_{n+1} using the predictor in the following form

[P]  u_{n+1}^{(0)} = Σ_{j=0}^{p̃} ã_j u_{n−j}^{(m)} + h Σ_{j=0}^{p̃} b̃_j f_{n−j}^{(m−1)}.    (11.69)

Here m ≥ 1 denotes the (fixed) number of corrector iterations that are carried out in the following steps [E], [C]: for k = 0, 1, ..., m − 1,

[E]  f_{n+1}^{(k)} = f(t_{n+1}, u_{n+1}^{(k)}),
[C]  u_{n+1}^{(k+1)} = Σ_{j=0}^{p} a_j u_{n−j}^{(m)} + h b_{−1} f_{n+1}^{(k)} + h Σ_{j=0}^{p} b_j f_{n−j}^{(m−1)}.

These implementations of the predictor-corrector technique are referred to as P(EC)^m. Another implementation, denoted by P(EC)^m E, also updates the function f at the end of the process and is given by

[P]  u_{n+1}^{(0)} = Σ_{j=0}^{p̃} ã_j u_{n−j}^{(m)} + h Σ_{j=0}^{p̃} b̃_j f_{n−j}^{(m)},

and for k = 0, 1, ..., m − 1,

[E]  f_{n+1}^{(k)} = f(t_{n+1}, u_{n+1}^{(k)}),
[C]  u_{n+1}^{(k+1)} = Σ_{j=0}^{p} a_j u_{n−j}^{(m)} + h b_{−1} f_{n+1}^{(k)} + h Σ_{j=0}^{p} b_j f_{n−j}^{(m)},

followed by

[E]  f_{n+1}^{(m)} = f(t_{n+1}, u_{n+1}^{(m)}).

Example 11.8 Heun's method (11.10) can be regarded as a predictor-corrector method whose predictor is the forward Euler method and whose corrector is the Crank-Nicolson method.
Another example combines the Adams-Bashforth method of order 2 (11.50) with the Adams-Moulton method of order 3 (11.51). Its corresponding PEC implementation is: given u_0^{(0)} = u_0^{(1)} = u_0, u_1^{(0)} = u_1^{(1)} = u_1 and f_0^{(0)} = f(t_0, u_0^{(0)}), f_1^{(0)} = f(t_1, u_1^{(0)}), compute for n = 1, 2, ...,

[P]  u_{n+1}^{(0)} = u_n^{(1)} + (h/2) [3f_n^{(0)} − f_{n−1}^{(0)}],
[E]  f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),
[C]  u_{n+1}^{(1)} = u_n^{(1)} + (h/12) [5f_{n+1}^{(0)} + 8f_n^{(0)} − f_{n−1}^{(0)}],

while the PECE implementation is: given u_0^{(0)} = u_0^{(1)} = u_0, u_1^{(0)} = u_1^{(1)} = u_1 and f_0^{(1)} = f(t_0, u_0^{(1)}), f_1^{(1)} = f(t_1, u_1^{(1)}), compute for n = 1, 2, ...,

[P]  u_{n+1}^{(0)} = u_n^{(1)} + (h/2) [3f_n^{(1)} − f_{n−1}^{(1)}],
[E]  f_{n+1}^{(0)} = f(t_{n+1}, u_{n+1}^{(0)}),
[C]  u_{n+1}^{(1)} = u_n^{(1)} + (h/12) [5f_{n+1}^{(0)} + 8f_n^{(1)} − f_{n−1}^{(1)}],
[E]  f_{n+1}^{(1)} = f(t_{n+1}, u_{n+1}^{(1)}).
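A minimal self-contained sketch of this PECE pair, assuming f is given as a function handle; the second starting value is produced here by one Heun step (any sufficiently accurate starter works).

% AB2 predictor, AM3 corrector, PECE implementation for y' = f(t,y)
f = @(t,y) -y; h = 0.1; t = 0:h:1; u = zeros(size(t)); u(1) = 1;
u(2) = u(1) + h/2*(f(t(1),u(1)) + f(t(2), u(1) + h*f(t(1),u(1))));
fv = [f(t(1),u(1)), f(t(2),u(2))];               % stored values f_{n-1}, f_n
for n = 2:length(t)-1
  up     = u(n) + h/2*(3*fv(2) - fv(1));         % [P]
  fp     = f(t(n+1), up);                        % [E]
  u(n+1) = u(n) + h/12*(5*fp + 8*fv(2) - fv(1)); % [C]
  fv     = [fv(2), f(t(n+1), u(n+1))];           % [E]
end
max(abs(u - exp(-t)))                            % third-order accurate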





Before studying the convergence of predictor-corrector methods, we introduce a simplification in the notation. Usually the number of steps of the predictor is greater than that of the corrector, so we define the number of steps of the predictor-corrector pair as being equal to the number of steps of the predictor. This number will be denoted henceforth by p. Owing to this definition, we no longer demand that the coefficients of the corrector satisfy |a_p| + |b_p| ≠ 0. Consider, for example, the predictor-corrector pair

[P]  u_{n+1}^{(0)} = u_n^{(1)} + h f(t_{n−1}, u_{n−1}^{(0)}),
[C]  u_{n+1}^{(1)} = u_n^{(1)} + (h/2) [f(t_n, u_n^{(0)}) + f(t_{n+1}, u_{n+1}^{(0)})],

for which p = 2 (even though the corrector is a one-step method). Consequently, the first and second characteristic polynomials of the corrector method will be ρ(r) = r^2 − r and σ(r) = (r^2 + r)/2 instead of ρ(r) = r − 1 and σ(r) = (r + 1)/2.

In any predictor-corrector method, the truncation error of the predictor combines with that of the corrector, generating a new truncation error which we are going to examine. Let q̃ and q be, respectively, the orders of the predictor and the corrector, and assume that y ∈ C^{q̂+1}, where q̂ = max(q̃, q). Then

y(t_{n+1}) − Σ_{j=0}^{p̃} ã_j y(t_{n−j}) − h Σ_{j=0}^{p̃} b̃_j f(t_{n−j}, y_{n−j}) = C̃_{q̃+1} h^{q̃+1} y^{(q̃+1)}(t_n) + O(h^{q̃+2}),

y(t_{n+1}) − Σ_{j=0}^{p} a_j y(t_{n−j}) − h Σ_{j=−1}^{p} b_j f(t_{n−j}, y_{n−j}) = C_{q+1} h^{q+1} y^{(q+1)}(t_n) + O(h^{q+2}),

where C̃_{q̃+1} and C_{q+1} are the error constants of the predictor and the corrector method, respectively. The following result holds.

Property 11.3 Let the predictor method have order q̃ and the corrector method have order q. Then:
If q̃ ≥ q (or q̃ < q with m > q − q̃), then the predictor-corrector method has the same order and the same PLTE as the corrector.
If q̃ < q and m = q − q̃, then the predictor-corrector method has the same order as the corrector, but different PLTE.
If q̃ < q and m ≤ q − q̃ − 1, then the predictor-corrector method has order equal to q̃ + m (thus less than q).
In particular, notice that if the predictor has order q − 1 and the corrector has order q, the PEC scheme suffices to get a method of order q. Moreover, the P(EC)^m E and P(EC)^m schemes always have the same order and the same PLTE.

Combining the Adams-Bashforth method of order q with the corresponding Adams-Moulton method of the same order, we obtain the so-called ABM method of order q. Its PLTE can be estimated as

[C*_{q+1} / (C_{q+1} − C*_{q+1})] (u_{n+1}^{(m)} − u_{n+1}^{(0)}),

where C_{q+1} and C*_{q+1} are the error constants given in Table 11.1. Accordingly, the steplength h can be decreased if the estimate of the PLTE exceeds a given tolerance and increased otherwise (for the adaptivity of the steplength in a predictor-corrector method, see [Lam91], pp. 128-147).
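In code, the estimate costs one line once the predicted and corrected values at t_{n+1} are available. A minimal sketch for the fourth-order ABM pair, where up and uc are hypothetical placeholders standing for one step of the pair:

% PLTE estimate for the ABM pair of order 4, constants from Table 11.1
up = 0.904833;  uc = 0.904837;     % hypothetical predicted/corrected values
C = 251/720;  Cs = -19/720;        % error constants C_5 and C*_5
plte = Cs/(C - Cs)*(uc - up)       % estimate of the corrector PLTE
tol = 1e-6;
if abs(plte) > tol
  % typical policy: halve h and redo the step; enlarge h if far below tol
end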

Program 93 provides an implementation of the P(EC)^m E methods. The input parameters at, bt, a, b contain the coefficients ã_j, b̃_j (j = 0, ..., p̃) of the predictor and the coefficients a_j (j = 0, ..., p), b_j (j = −1, ..., p) of the corrector. Moreover, f is a string containing the expression of f(t, y), h is the stepsize, t0 and tf are the end points of the time integration interval, u0 is the vector of the initial data, and m is the number of corrector inner iterations. The input variable pece must be set equal to 'y' if the P(EC)^m E scheme is selected; otherwise the P(EC)^m scheme is chosen.

Program 93 - predcor : Predictor-corrector scheme
function [u,t] = predcor(a,b,at,bt,h,f,t0,u0,tf,pece,m)
% y and t must be in scope, since f is evaluated via eval
p = max(length(a),length(b)-1); pt = max(length(at),length(bt));
q = max(p,pt); if length(u0) < q, return, end
t = [t0:h:t0+(q-1)*h]; u = u0; y = u0; fe = eval(f);
k = q;
for t = t0+q*h:h:tf
  % predictor step [P]
  ut = sum(at.*u(k:-1:k-pt+1)) + h*sum(bt.*fe(k:-1:k-pt+1));
  y = ut; foy = eval(f);
  uv = sum(a.*u(k:-1:k-p+1)) + h*sum(b(2:p+1).*fe(k:-1:k-p+1));
  k = k+1;
  % m corrector iterations [E]-[C]
  for j = 1:m
    fy = foy; up = uv + h*b(1)*fy; y = up; foy = eval(f);
  end
  if (pece=='y' | pece=='Y')
    fe = [fe, foy];    % final evaluation [E]: the P(EC)^m E variant
  else
    fe = [fe, fy];
  end
  u = [u, up];
end
t = [t0:h:tf];


Example 11.9 Let us check the performance of the P(EC)^m E method on the Cauchy problem y'(t) = e^{−y(t)} for t ∈ [0, 1] with y(0) = 0, whose exact solution is y(t) = log(1 + t). In all the numerical experiments, the corrector method is the Adams-Moulton third-order scheme (AM3), while the explicit Euler (AB1) and the Adams-Bashforth second-order (AB2) methods are used as predictors. Figure 11.8 shows that the pair AB2-AM3 (m = 1) yields a third-order convergence rate, while AB1-AM3 (m = 1) has first-order accuracy. Taking m = 2 allows one to recover the third-order convergence rate of the corrector for the AB1-AM3 pair.


As for absolute stability, the characteristic polynomial of P(EC)^m methods reads

Π_{P(EC)^m}(r) = b_{−1} r^p [ρ(r) − hλ σ(r)] + [H^m (1 − H)/(1 − H^m)] [ρ̃(r) σ(r) − ρ(r) σ̃(r)],

while for P(EC)^m E we have

Π_{P(EC)^m E}(r) = ρ(r) − hλ σ(r) + [H^m (1 − H)/(1 − H^m)] [ρ̃(r) − hλ σ̃(r)].
[FIGURE 11.8. Convergence rates of P(EC)^m E methods as functions of log(h). The symbol ∇ refers to the AB2-AM3 method (m = 1), ◦ to AB1-AM3 (m = 1), and the remaining marker to AB1-AM3 with m = 2]

We have set H = hλ b_{−1} and denoted by ρ̃ and σ̃ the first and second characteristic polynomials of the predictor method, respectively. The polynomials ρ and σ are related to the first and second characteristic polynomials of the corrector, as explained after Example 11.8. Notice that in both cases the characteristic polynomial tends to the corresponding polynomial of the corrector method, since the function H^m (1 − H)/(1 − H^m) tends to zero as m tends to infinity.

Example 11.10 If we consider the ABM methods with p steps, the characteristic polynomials are ρ(r) = ρ̃(r) = r(r^{p−1} − r^{p−2}), while σ(r) equals r times the second characteristic polynomial of the corrector. In Figure 11.9 (right) the stability regions for the ABM methods of order 2 are plotted. In the case of the ABM methods of order 2, 3 and 4, the corresponding stability regions can be ordered by size, namely, from the largest to the smallest: those of the PECE, P(EC)^2 E, predictor and PEC methods. The one-step ABM method is an exception to this rule, the largest region being the one corresponding to the predictor method (see Figure 11.9, left).

[FIGURE 11.9. Stability regions for the ABM methods of order 1 (left) and 2 (right)]



11.8 Runge-Kutta (RK) Methods
When evolving from the forward Euler method (11.7) toward higher-order methods, linear multistep methods (MS) and Runge-Kutta methods (RK) adopt two opposite strategies.
Like the Euler method, MS schemes are linear with respect to both u_n and f_n = f(t_n, u_n), require only one functional evaluation at each time step, and their accuracy can be increased at the expense of increasing the number of steps. RK methods, on the other hand, maintain the structure of one-step methods and increase their accuracy at the price of an increase in the number of functional evaluations at each time level, thus sacrificing linearity.


A consequence is that RK methods are better suited than MS methods to stepsize adaptivity, whereas estimating the local error for RK methods is more difficult than in the case of MS methods.
In its most general form, an RK method can be written as

u_{n+1} = u_n + h F(t_n, u_n, h; f),  n ≥ 0,    (11.70)

where F is the increment function defined as follows:

F(t_n, u_n, h; f) = Σ_{i=1}^{s} b_i K_i,
K_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} K_j),  i = 1, 2, ..., s,    (11.71)

where s denotes the number of stages of the method. The coefficients {a_{ij}}, {c_i} and {b_i} fully characterize an RK method and are usually collected in the so-called Butcher array

c_1 | a_{11}  a_{12}  ...  a_{1s}
c_2 | a_{21}  a_{22}  ...  a_{2s}              c | A
 .  |   .       .      .     .        or      -------
c_s | a_{s1}  a_{s2}  ...  a_{ss}               | b^T
    | b_1     b_2     ...  b_s

where A = (a_{ij}) ∈ R^{s×s}, b = (b_1, ..., b_s)^T ∈ R^s and c = (c_1, ..., c_s)^T ∈ R^s. We shall henceforth assume that the following condition holds

c_i = Σ_{j=1}^{s} a_{ij},  i = 1, ..., s.    (11.72)

If the coefficients a_{ij} in A are equal to zero for j ≥ i, with i = 1, 2, ..., s, then each K_i can be explicitly computed in terms of the i − 1 coefficients K_1, ..., K_{i−1} that have already been determined. In such a case the RK method is explicit. Otherwise, it is implicit, and solving a nonlinear system of size s is necessary for computing the coefficients K_i.
The increase in computational effort for implicit schemes makes their use quite expensive; an acceptable compromise is provided by semi-implicit RK methods, in which case a_{ij} = 0 for j > i, so that each K_i is the solution of the single nonlinear equation

K_i = f(t_n + c_i h, u_n + h a_{ii} K_i + h Σ_{j=1}^{i−1} a_{ij} K_j).

A semi-implicit scheme thus requires the solution of s independent nonlinear equations.
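In the explicit case, condition (11.72) and the strictly lower triangular structure of A make a generic RK step a short loop over the stages. A minimal sketch, shown with Heun's tableau (order 2):

% one explicit RK step driven by the Butcher array (A, b, c)
A = [0 0; 1 0]; b = [1/2 1/2]; c = [0; 1];   % Heun's method
f = @(t,y) -y; tn = 0; un = 1; h = 0.1;
s = length(b); K = zeros(s,1);
for i = 1:s
  K(i) = f(tn + c(i)*h, un + h*(A(i,1:i-1)*K(1:i-1)));
end
unew = un + h*(b*K)                          % u_{n+1}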

The local truncation error τ_{n+1}(h) at the node t_{n+1} of the RK method (11.70) is defined through the residual equation

h τ_{n+1}(h) = y_{n+1} − y_n − h F(t_n, y_n, h; f),

where y(t) is the exact solution to the Cauchy problem (11.1). Method (11.70) is consistent if τ(h) = max_n |τ_n(h)| → 0 as h → 0. It can be shown (see [Lam91]) that this happens iff

Σ_{i=1}^{s} b_i = 1.

As usual, we say that (11.70) is a method of order p (≥ 1) with respect to h if τ(h) = O(h^p) as h → 0.
As for convergence, since RK methods are one-step methods, consistency implies stability and, in turn, convergence. As happens for MS methods, estimates of τ(h) can be derived; however, these estimates are often too complicated to be profitably used. We only mention that, as for MS methods, if an RK scheme has local truncation error τ_n(h) = O(h^p) for any n, then the convergence order will also be equal to p.
The following result establishes a relation between the order and the number of stages of explicit RK methods.

Property 11.4 The order of an s-stage explicit RK method cannot be greater than s. Moreover, there do not exist s-stage explicit RK methods of order s for s ≥ 5.

We refer the reader to [But87] for the proofs of this result and of the results we give below. In particular, for orders ranging between 1 and 8, the minimum number of stages s_min required to get a method of the corresponding order is shown below

order   1  2  3  4  5  6  7   8
s_min   1  2  3  4  6  7  9  11

Notice that 4 is the maximum number of stages for which the order of the method is not less than the number of stages itself. An example of a fourth-order RK method is provided by the following explicit 4-stage method

u_{n+1} = u_n + (h/6) (K_1 + 2K_2 + 2K_3 + K_4),
K_1 = f_n,
K_2 = f(t_n + h/2, u_n + (h/2) K_1),    (11.73)
K_3 = f(t_n + h/2, u_n + (h/2) K_2),
K_4 = f(t_{n+1}, u_n + h K_3).
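A direct transcription of (11.73), assuming f is given as a function handle; the test problem y' = −2ty, y(0) = 1, with exact solution e^{−t²}, is an assumed example:

% classical fourth-order RK method (11.73)
f = @(t,y) -2*t.*y; h = 0.05; t = 0:h:1; u = zeros(size(t)); u(1) = 1;
for n = 1:length(t)-1
  K1 = f(t(n),       u(n));
  K2 = f(t(n) + h/2, u(n) + h/2*K1);
  K3 = f(t(n) + h/2, u(n) + h/2*K2);
  K4 = f(t(n) + h,   u(n) + h*K3);
  u(n+1) = u(n) + h/6*(K1 + 2*K2 + 2*K3 + K4);
end
max(abs(u - exp(-t.^2)))    % global error, O(h^4)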
As far as implicit schemes are concerned, the maximum achievable order
using s stages is equal to 2s.

Remark 11.4 (The case of systems of ODEs) An RK method can be readily extended to systems of ODEs. However, the order of an RK method in the scalar case does not necessarily coincide with that in the vector case. In particular, for p ≥ 4, a method having order p in the case of the autonomous system y' = f(y), with f : R^m → R^m, maintains order p even when applied to an autonomous scalar equation y' = f(y), but the converse is not true. Regarding this concern, see [Lam91], Section 5.8.

11.8.1 Derivation of an Explicit RK Method
The standard technique for deriving an explicit RK method consists of enforcing that the highest possible number of terms in the Taylor expansion of the exact solution y_{n+1} about t_n coincide with those of the approximate solution u_{n+1}, assuming that we take one step of the RK method starting from the exact solution y_n. We provide an example of this technique in the case of an explicit 2-stage RK method.
Let us consider a 2-stage explicit RK method and assume that at the n-th step we dispose of the exact solution y_n. Then

u_{n+1} = y_n + h F(t_n, y_n, h; f) = y_n + h (b_1 K_1 + b_2 K_2),
K_1 = f_n,  K_2 = f(t_n + h c_2, y_n + h c_2 K_1),

having assumed that (11.72) is satisfied. Expanding K_2 in a Taylor series in a neighborhood of t_n and truncating the expansion at the second order,
