For n time steps and q_h = 0 we have

  (e^{−A_h τ})^n w_i = e^{−λ_i t_n} w_i ≈ R(−λ_i τ)^n w_i .  (7.79)

Thus, the restriction to eigenvectors w_i with eigenvalues λ_i has diagonalized the system of ordinary differential equations (7.41) for q_h = 0 to the scalar problems

  ξ' + λ_i ξ = 0 ,  t ∈ (0, T) ,
  ξ(0) = ξ_0  (7.80)

(for ξ_0 = 1) with solution ξ(t) = e^{−λ_i t} ξ_0, for which the one-step-theta method gives the approximation

  ξ_{n+1} = R(−λ_i τ) ξ_n = (R(−λ_i τ))^{n+1} ξ_0  (7.81)

at t = t_{n+1}. A basic requirement for a discretization method is the following:
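The approximation property (7.79)–(7.81) is easy to check numerically. A minimal sketch, assuming the standard rational stability function of the one-step-theta method, R(z) = (1 + (1 − Θ)z)/(1 − Θz) (consistent with Example 7.20 below); the chosen λ, τ, and n are illustrative only:

```python
import math

def R(z, theta):
    # stability function of the one-step-theta method (an assumed standard form):
    # R(z) = (1 + (1 - theta) z) / (1 - theta z)
    return (1 + (1 - theta) * z) / (1 - theta * z)

lam, tau, n = 2.0, 0.01, 100           # t_n = n * tau = 1.0
exact = math.exp(-lam * n * tau)       # e^{-lambda t_n}
for theta in (0.0, 0.5, 1.0):
    approx = R(-lam * tau, theta) ** n  # R(-lambda tau)^n as in (7.79)
    print(theta, approx, abs(approx - exact))
```

For Θ = 1/2 the powers of R reproduce e^{−λ t_n} noticeably more accurately than for Θ = 0 or Θ = 1, matching the consistency orders discussed in Section 7.6.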

Definition 7.17 A one-step method is called nonexpansive if for two numerical approximations u_h^n and ũ_h^n, generated under the same conditions except for two discrete initial values u_h^0 and ũ_h^0, respectively, the following estimate is valid:

  |u_h^{n+1} − ũ_h^{n+1}| ≤ |u_h^n − ũ_h^n| ,  n ∈ {0, . . . , N − 1} .

A recursive application of this estimate immediately results in

  |u_h^n − ũ_h^n| ≤ |u_h^0 − ũ_h^0| ,  n ∈ {1, . . . , N} .

Here a general one-step method has the form

  u_h^{n+1} = u_h^n + τ Φ(τ, t_n, u_h^n) ,  n ∈ {0, . . . , N − 1} ,

with u_h^0 = u_0 and a so-called generating function Φ : ℝ_+ × [0, T) × ℝ^M → ℝ^M that characterizes the particular method. The generating function of the one-step-theta method applied to the system (7.41) is

  Φ(τ, t, ξ) = −(I + τΘA_h)^{−1} [A_h ξ − q_h(t + Θτ)] .

Thus nonexpansiveness models the fact that perturbations, i.e., in particular errors, are not amplified in time by the numerical method. This is considerably weaker than the exponential decay of the continuous solution (see (7.18)), which would be too strong a requirement.

Having in mind (7.79)–(7.81), and expecting the (real parts of the) eigenvalues to be positive, the following restriction is sufficient:

Definition 7.18 A one-step method is called A-stable if its application to the scalar model problem (7.80),

  ξ' + λξ = 0 ,  t ∈ (0, T) ,
  ξ(0) = ξ_0 ,

yields a nonexpansive method for all complex parameters λ with Re λ > 0 and arbitrary step sizes τ > 0.

Because of (7.81) we have

  ξ_{n+1} − ξ̃_{n+1} = R(−λτ) [ξ_n − ξ̃_n]

for two approximations of the one-step-theta method applied to (7.80). This shows that the condition

  |R(z)| ≤ 1  for all z with Re z < 0

is sufficient for the A-stability of the method. More generally, any one-step method that can be written for (7.80) in the form

  ξ_{n+1} = R(−λ_i τ) ξ_n  (7.82)

is nonexpansive iff

  |R(−λ_i τ)| ≤ 1 .  (7.83)

The one-step-theta method is nonexpansive for (7.41) in the case of an eigenvector basis if (7.83) holds for all eigenvalues λ_i and the step size τ. A convenient formulation can be achieved by the notion of the domain of stability.

Definition 7.19 Given a stability function R : ℂ → ℂ, the set

  S_R := {z ∈ ℂ : |R(z)| < 1}

is called a domain of (absolute) stability of the one-step method ξ_{n+1} = R(−λτ) ξ_n.

Example 7.20 For the one-step-theta method we have:

(1) For Θ = 0, S_R is the (open) unit disk with centre z = −1.

(2) For Θ = 1/2, S_R coincides with the left complex half-plane (except for the imaginary axis).

(3) For Θ = 1, S_R is the whole complex plane except for the closed unit disk with centre z = 1.
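These three characterizations can be spot-checked numerically. A sketch assuming the rational stability function R(z) = (1 + (1 − Θ)z)/(1 − Θz) of the one-step-theta method; the sample points z are arbitrary:

```python
def R(z, theta):
    # assumed stability function of the one-step-theta method
    return (1 + (1 - theta) * z) / (1 - theta * z)

# theta = 0: S_R is the open unit disk centred at z = -1
assert abs(R(-1.0 + 0.5j, 0.0)) < 1       # inside that disk
assert abs(R(0.5j, 0.0)) >= 1             # outside it
# theta = 1/2: S_R is the open left half-plane
assert abs(R(-0.1 + 50j, 0.5)) < 1
assert abs(R(0.1 + 50j, 0.5)) >= 1
# theta = 1: S_R is the complement of the closed unit disk centred at z = 1
assert abs(R(-0.1, 1.0)) < 1
assert abs(R(1.0 + 0.5j, 1.0)) >= 1
print("all sample points consistent with Example 7.20")
```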

The notion of A-stability reflects the fact that the property |e^{−λτ}| ≤ 1 for Re λ > 0 is satisfied by the function R(−λτ), too:

Corollary 7.21 For a continuous stability function R, the one-step method ξ_{n+1} = R(−λτ) ξ_n is A-stable if the closure S̄_R of its domain of stability contains the left complex half-plane.

Thus the Crank–Nicolson and the implicit Euler methods are A-stable, but not the explicit Euler method. For the latter, to have nonexpansiveness we need the requirement

  |1 − λ_i τ| = |R(−λ_i τ)| ≤ 1 ,  (7.84)

which is a step size restriction: For positive λ_i it reads

  τ ≤ 2 / max{λ_i | i = 1, . . . , M} .  (7.85)

For the example of the five-point stencil discretization of the heat equation on a rectangle with Dirichlet boundary conditions according to (7.37)–(7.39), equation (7.84) reads

  |1 − (2τ/h²)(2 − cos(νπh/a) − cos(µπh/b))| ≤ 1  (7.86)

for all ν = 1, . . . , l − 1, µ = 1, . . . , m − 1. The following condition is sufficient (and for l, m → ∞ also necessary):

  τ/h² ≤ 1/4 .  (7.87)
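The bound (7.87) can be verified against the explicit eigenvalues of the five-point stencil. A sketch, assuming a uniform grid with a = b = 1 and the eigenvalue formula λ_{νµ} = (2/h²)(2 − cos(νπh/a) − cos(µπh/b)) matching (7.86):

```python
import math

a = b = 1.0
l = m = 20
h = a / l                           # assumed uniform: a/l = b/m = h
lams = [2.0 / h**2 * (2 - math.cos(nu * math.pi * h / a)
                        - math.cos(mu * math.pi * h / b))
        for nu in range(1, l) for mu in range(1, m)]

tau_ok = h**2 / 4                   # the sufficient condition (7.87)
assert all(abs(1 - tau_ok * lam) <= 1 for lam in lams)   # (7.84) holds

tau_bad = 1.05 * h**2 / 4           # slightly too large a step size
assert any(abs(1 - tau_bad * lam) > 1 for lam in lams)   # (7.84) fails
print(max(lams), 8 / h**2)          # lambda_max approaches 8/h^2 as l, m grow
```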

For the finite element method a similar estimate holds in a more general context. Under the assumptions of Theorem 3.45 we conclude from its proof (see (3.141)) that

  max{λ_i | i = 1, . . . , M} ≤ C (min_{K∈T_h} h_K)^{−2}

holds for the eigenvalues of B_h^{−1} Â_h, where B_h = E_h^T E_h is the mass matrix and Â_h the stiffness matrix, and thus also for A_h = E_h B_h^{−1} Â_h E_h^{−1}. Here C is a constant independent of h. Therefore, we have

  τ (min_{K∈T_h} h_K)^{−2} ≤ 2/C  (7.88)

as a sufficient condition for the nonexpansiveness of the method, with a specific constant depending on the stability constant of the bilinear form and the constant from Theorem 3.43, (2).

These step size restrictions impede the attractivity of the explicit Euler method, and so implicit versions are often used. But also in the A-stable case there are distinctions in the behaviour (of the stability functions). Comparing them, we see that

  for Θ = 1/2 :  R(−x) → −1  for x → ∞ ;
  for Θ = 1 :   R(−x) → 0   for x → ∞ .  (7.89)

This means that for the implicit Euler method the influence of large eigenvalues will be damped the more greatly, the larger they are, corresponding to the exponential function to be approximated, but the Crank–Nicolson method preserves these components nearly undamped in an oscillatory manner. This may lead to a problem for "rough" initial data or discontinuities between initial data and Dirichlet boundary conditions. On the other hand, the implicit Euler method may also damp solution components too strongly, making the solution "too" smooth.

The corresponding notion is the following:

Definition 7.22 One-step methods whose stability function satisfies

  R(z) → 0  for Re z → −∞

are called L-stable.

An intermediate position is filled by the strongly A-stable methods. They are characterized by the properties

• |R(z)| < 1 for all z with Re z < 0,

• lim_{Re z→−∞} |R(z)| < 1.

Example 7.23

(1) Among the one-step-theta methods, only the implicit Euler method (Θ = 1) is L-stable.

(2) The Crank–Nicolson method (Θ = 1/2) is not strongly A-stable, because of (7.89).
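The limits (7.89) behind this example can be seen directly. A sketch with the assumed theta-method stability function R(z) = (1 + (1 − Θ)z)/(1 − Θz):

```python
def R(z, theta):
    # assumed stability function of the one-step-theta method
    return (1 + (1 - theta) * z) / (1 - theta * z)

x = 1e6                 # a large eigenvalue times the step size
cn = R(-x, 0.5)         # Crank-Nicolson: close to -1, |R| just below 1
ie = R(-x, 1.0)         # implicit Euler: close to 0 (L-stability)
print(cn, ie)
assert abs(cn + 1) < 1e-5 and abs(cn) < 1   # not strongly A-stable
assert abs(ie) < 1e-5                       # L-stable
```

The Crank–Nicolson value near −1 is exactly the "nearly undamped, oscillatory" behaviour described after (7.89).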

The nonexpansiveness of a one-step method can also be characterized by a norm condition for the solution operator E_{h,τ}.

Theorem 7.24 Let the spatial discretization matrix A_h have a basis of eigenvectors w_i, orthogonal with respect to the scalar product ⟨·, ·⟩_h, with eigenvalues λ_i, i = 1, . . . , M. Consider the problem (7.41) and its discretization in time by a one-step method with a linear solution representation

  u_h^n = E_{h,τ}^n u_0  (7.90)

for q_h = 0, where E_{h,τ} ∈ ℝ^{M,M}, and a stability function R such that (7.82) and

  E_{h,τ} w_i = R(−λ_i τ) w_i  (7.91)

for i = 1, . . . , M hold. Then the following statements are equivalent:

(1) The one-step method is nonexpansive for the model problem (7.80) and all eigenvalues λ_i of A_h.

(2) The one-step method is nonexpansive for the problem (7.41) with respect to the norm ‖·‖_h induced by ⟨·, ·⟩_h.

(3) ‖E_{h,τ}‖_h ≤ 1 in the matrix norm ‖·‖_h induced by the vector norm ‖·‖_h.

Proof: We prove (1) ⇒ (3) ⇒ (2) ⇒ (1).

(1) ⇒ (3): According to (7.83), (1) is characterized by

  |R(−λ_i τ)| ≤ 1  (7.92)

for the eigenvalues λ_i. For the eigenvector w_i with eigenvalue λ_i we have (7.91), and thus, for an arbitrary u_0 = Σ_{i=1}^M c_i w_i,

  ‖E_{h,τ} u_0‖_h² = ‖ Σ_{i=1}^M c_i E_{h,τ} w_i ‖_h² = ‖ Σ_{i=1}^M c_i R(−λ_i τ) w_i ‖_h² = Σ_{i=1}^M c_i² |R(−λ_i τ)|² ‖w_i‖_h²

because of the orthogonality of the w_i, and analogously,

  ‖u_0‖_h² = Σ_{i=1}^M c_i² ‖w_i‖_h² ,

and finally, because of (7.92),

  ‖E_{h,τ} u_0‖_h² ≤ Σ_{i=1}^M c_i² ‖w_i‖_h² = ‖u_0‖_h² ,

which is assertion (3).

(3) ⇒ (2): This is obvious.

(2) ⇒ (1):

  |R(−λ_i τ)| ‖w_i‖_h = ‖R(−λ_i τ) w_i‖_h = ‖E_{h,τ} w_i‖_h ≤ ‖w_i‖_h .  □
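The equivalence (1) ⇔ (3) can be illustrated numerically. A sketch assuming a symmetric A_h (a 1D Dirichlet Laplacian, so its eigenvectors are orthogonal with respect to the Euclidean product, which stands in here for ⟨·, ·⟩_h), with E_{h,τ} of the one-step-theta method:

```python
import numpy as np

M, theta, tau = 8, 0.5, 0.01
h = 1.0 / (M + 1)
A = (np.diag([2.0] * M) - np.diag([1.0] * (M - 1), 1)
     - np.diag([1.0] * (M - 1), -1)) / h**2          # 1D Dirichlet Laplacian

# E_{h,tau} = (I + tau*theta*A)^{-1} (I - tau*(1-theta)*A)
E = np.linalg.solve(np.eye(M) + tau * theta * A,
                    np.eye(M) - tau * (1 - theta) * A)
lams = np.linalg.eigvalsh(A)
Rvals = (1 - tau * (1 - theta) * lams) / (1 + tau * theta * lams)

# (3) <-> (1): the induced (here Euclidean) norm of E equals max |R(-lambda_i tau)|
print(np.linalg.norm(E, 2), np.max(np.abs(Rvals)))
```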

Thus, nonexpansiveness is often identical to what is (vaguely) called stability:

Definition 7.25 A one-step method with a solution representation E_{h,τ} for q_h = 0 is called stable with respect to the vector norm ‖·‖_h if

  ‖E_{h,τ}‖_h ≤ 1

in the induced matrix norm ‖·‖_h.

Till now we have considered only homogeneous boundary data and right-hand sides. At least for the one-step-theta method this is not a restriction:

Theorem 7.26 Consider the one-step-theta method under the assumptions of Theorem 7.24, with λ_i ≥ 0, i = 1, . . . , M, and with τ such that the method is stable. Then the solution is stable in the initial condition u_0 and the right-hand side q_h in the following sense (with Θ̄ := 1 − Θ):

  ‖u_h^n‖_h ≤ ‖u_0‖_h + τ Σ_{k=1}^n ‖q_h(t_k − Θ̄τ)‖_h .  (7.93)

Proof: From the solution representation (7.70) we conclude that

  ‖u_h^n‖_h ≤ ‖E_{h,τ}‖_h^n ‖u_0‖_h + τ Σ_{k=1}^n ‖E_{h,τ}‖_h^{n−k} ‖(I + τΘA_h)^{−1}‖_h ‖q_h(t_k − Θ̄τ)‖_h ,  (7.94)

using the submultiplicativity of the matrix norm. We have the estimate

  ‖(I + ΘτA_h)^{−1} w_i‖_h = (1/(1 + Θτλ_i)) ‖w_i‖_h ≤ ‖w_i‖_h ,

and thus, as in the proof of Theorem 7.24, (1) ⇒ (3),

  ‖(I + ΘτA_h)^{−1}‖_h ≤ 1 ,

which concludes the proof.  □
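Estimate (7.93) can be monitored along a computed trajectory. A sketch with Θ = 1/2, a 1D Dirichlet Laplacian as A_h, a made-up source q_h, and the Euclidean norm in place of ‖·‖_h (all assumptions for illustration):

```python
import numpy as np

M, theta, tau, N = 8, 0.5, 0.05, 40
h = 1.0 / (M + 1)
A = (np.diag([2.0] * M) - np.diag([1.0] * (M - 1), 1)
     - np.diag([1.0] * (M - 1), -1)) / h**2
q = lambda t: np.sin(5 * t) * np.ones(M)        # arbitrary smooth source

u = np.ones(M)                                   # u_0
bound = np.linalg.norm(u)                        # right-hand side of (7.93)
for n in range(N):
    t_mid = (n + theta) * tau                    # = t_{n+1} - (1 - theta) * tau
    rhs = (np.eye(M) - tau * (1 - theta) * A) @ u + tau * q(t_mid)
    u = np.linalg.solve(np.eye(M) + tau * theta * A, rhs)
    bound += tau * np.linalg.norm(q(t_mid))
    assert np.linalg.norm(u) <= bound + 1e-12    # estimate (7.93) at level n+1
print(np.linalg.norm(u), bound)
```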

The stability condition requires step size restrictions for Θ < 1/2, which have been discussed above for Θ = 0.

The requirement of stability can be weakened to

  ‖E_{h,τ}‖_h ≤ 1 + Kτ  (7.95)

for some constant K > 0, which in the situation of Theorem 7.24 is equivalent to

  |R(−λτ)| ≤ 1 + Kτ

for all eigenvalues λ of A_h. Because of

  (1 + Kτ)^n ≤ exp(Knτ) ,

the additional factor exp(KT) appears in (7.93), and correspondingly exp(K(n − k)τ) in the sum. If the process is to be considered only on a small time interval, this becomes part of the constant, but for large time horizons the estimate becomes inconclusive.

On the other hand, for the one-step-theta method with 1/2 < Θ ≤ 1, the estimate ‖E_{h,τ}‖_h ≤ 1, and thus the constants in (7.93), can be sharpened to ‖E_{h,τ}‖_h ≤ R(−λ_min τ), where λ_min is the smallest eigenvalue of A_h, reflecting the exponential decay. For example, for Θ = 1, the (error in the) initial data is damped with the factor

  ‖E_{h,τ}^n‖_h = R(−λ_min τ)^n = 1/(1 + λ_min τ)^n ,

which for τ ≤ τ_0 for some fixed τ_0 > 0 can be estimated by exp(−λnτ) for some λ > 0.

We conclude this section with an example.

Example 7.27 (Prothero–Robinson model) Let g ∈ C¹[0, T] be given. We consider the initial value problem

  ξ' + λ(ξ − g) = g' ,  t ∈ (0, T) ,
  ξ(0) = ξ_0 .

Obviously, g is a particular solution of the differential equation, so the general solution is

  ξ(t) = e^{−λt} [ξ_0 − g(0)] + g(t) .

In the special case g(t) = arctan t, λ = 500, and for the indicated values of ξ_0, Figure 7.1 shows the qualitative behaviour of the solution.

[Figure 7.1. Prothero–Robinson model: solutions for the initial values ξ_0 = 400, 50, 0, −100.]

It is worth mentioning that the figure is extremely scaled: The continuous line (for ξ_0 = 0) seems to be straight, but it is the graph of g.

The explicit Euler method for this model is

  ξ^{n+1} = (1 − λτ) ξ^n + τ [g'(t_n) + λ g(t_n)] .

According to the above considerations, it is nonexpansive only if λτ ≤ 2 holds. For large numbers λ, this is a very restrictive step size condition; see also the discussion of (7.85) to (7.87).

Due to their better stability properties, implicit methods such as the Crank–Nicolson and the implicit Euler methods do not have such step size restrictions. Nevertheless, the application of implicit methods is not free from surprises. For example, in the case of large numbers λ, an order reduction can occur.
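The contrast between the explicit and the implicit Euler method on this model can be reproduced in a few lines. A sketch with λτ = 5, i.e., well beyond the nonexpansiveness bound (the chosen τ and ξ_0 are illustrative only):

```python
import math

lam = 500.0
g, dg = math.atan, lambda t: 1.0 / (1.0 + t * t)   # g and g'
tau, N, xi0 = 0.01, 100, 0.0                       # lam * tau = 5

xe = xi = xi0
for n in range(N):
    t = n * tau
    # explicit Euler: xi^{n+1} = (1 - lam*tau) xi^n + tau (g'(t_n) + lam g(t_n))
    xe = (1 - lam * tau) * xe + tau * (dg(t) + lam * g(t))
    # implicit Euler: (1 + lam*tau) xi^{n+1} = xi^n + tau (g'(t_{n+1}) + lam g(t_{n+1}))
    xi = (xi + tau * (dg(t + tau) + lam * g(t + tau))) / (1 + lam * tau)

print(abs(xe - g(N * tau)), abs(xi - g(N * tau)))
```

The explicit iterate blows up, while the implicit one stays close to the exact solution ξ(t) = g(t) (here ξ_0 = g(0) = 0).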

Exercises

7.10 Determine the corresponding domain of stability S_R of the one-step-theta method for the following values of the parameter Θ: 0, 1/2, 1.

7.11 Show the L-stability of the implicit Euler method.

7.12 (a) Show that the discretization

  ξ^n = ξ^{n−2} + 2τ f(t_{n−1}, ξ^{n−1}) ,  n = 2, . . . , N

(midpoint rule), applied to the model equation ξ' = f(t, ξ) with f(t, ξ) = −λξ and λ > 0, leads, for a sufficiently small step size τ > 0, to a general solution that can be additively decomposed into a decaying and an increasing (by absolute value) oscillating component.

(b) Show that the oscillating component can be damped if additionally the quantity ξ_*^N is computed (modified midpoint rule):

  ξ_*^N = (1/2) [ξ^N + ξ^{N−1} + τ f(t_N, ξ^N)] .

7.13 Let m ∈ ℕ be given. Find a polynomial R_m(z) = 1 + z + Σ_{j=2}^m γ_j z^j (γ_j ∈ ℝ) such that the corresponding domain of absolute stability for R(z) := R_m(z) contains an interval of the negative real axis that is as large as possible.

7.5 The Maximum Principle for the One-Step-Theta Method

In Section 1.4 we have seen that for a discrete problem of the form (1.31) there is a hierarchy of properties, ranging from a comparison principle to a strong maximum principle, which is in turn implied by a hierarchy of conditions, partly summarized as (1.32) or (1.32)*. To remind the reader, we regroup these conditions accordingly:

The collection of conditions (1.32) (1), (2), (3) i), (4)* is called (IM). (IM) implies the inverse monotonicity of A_h (Theorem 1.12, (1.39)).

The collection of conditions (IM), (5) is called (CP). (CP) implies a comparison principle in the sense of Corollary 1.13.

The collection of conditions (CP), (6)* is called (MP)*. (MP)* implies a maximum principle in the form of Theorem 1.10 (1.38).

Alternatively, the collection of conditions (CP), (6)# (see Exercise 1.13) is called (MP). (MP) implies a maximum principle in the form of Theorem 1.9 (1.34).

Finally, the collection of conditions (CP), (6), (4) (instead of (4)*), (7) is called (SMP). (SMP) implies a strong maximum principle in the sense of Theorem 1.9.

An L∞-stability estimate in the sense of Theorem 1.14 is closely related. This will be taken up in the next section.

In the following we will discuss the above-mentioned properties for the one-step-theta method, cast into the form (1.31), on the basis of corresponding properties of the underlying elliptic problem and its discretization. It will turn out that under a reasonable condition (see (7.100)), condition (4)* (and thus (3) ii)) will not be necessary for the elliptic problem. This reflects the fact that, contrary to the elliptic problem, for the parabolic problem also the case of a pure Neumann boundary condition (where no degrees of freedom are prescribed and thus eliminated) is allowed, since the initial condition acts as a Dirichlet boundary condition.

Assuming that the discretization of the underlying elliptic problem is of the form (1.31), we return to the notation M = M_1 + M_2, where M_2 is the number of degrees of freedom eliminated, and thus A_h, B_h ∈ ℝ^{M_1,M_1}.

We write the discrete problem according to (7.66) as one large system of equations for the unknown

  u_h = (u_h^1, u_h^2, . . . , u_h^N)^T ,  (7.96)

in which the vectors of grid values u_h^i ∈ ℝ^{M_1} are collected into one large vector of dimension M̄_1 := N · M_1. Thus the grid points in Ω × (0, T) are the points (x_j, t_n), n = 1, . . . , N, x_j ∈ Ω_h, e.g., for the finite difference method. The defining system of equations has the form

  C_h u_h = p_h ,  (7.97)

where C_h is the block lower bidiagonal matrix

  C_h = [ I + τΘA_h                                        ]
        [ −I + τΘ̄A_h   I + τΘA_h                           ]
        [               ⋱            ⋱                     ]
        [               −I + τΘ̄A_h   I + τΘA_h             ]

(all remaining blocks being zero), again with Θ̄ := 1 − Θ, and

  p_h = [ τ q_h(Θτ) + (I − τΘ̄A_h) u^0 ]
        [ τ q_h((1 + Θ)τ)              ]
        [ ⋮                            ]
        [ τ q_h((N − 1 + Θ)τ)          ] .
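The all-at-once system (7.97) is equivalent to the usual time-stepping recursion, which can be checked by assembling C_h explicitly. A sketch assuming a 1D Dirichlet Laplacian as A_h and made-up data (only the block structure matters):

```python
import numpy as np

M1, N, theta, tau = 3, 4, 0.5, 0.1
h = 1.0 / (M1 + 1)
A = (np.diag([2.0] * M1) - np.diag([1.0] * (M1 - 1), 1)
     - np.diag([1.0] * (M1 - 1), -1)) / h**2
I = np.eye(M1)

# block bidiagonal C_h: diagonal I + tau*theta*A, subdiagonal -(I - tau*(1-theta)*A)
C = (np.kron(np.eye(N), I + tau * theta * A)
     + np.kron(np.eye(N, k=-1), -(I - tau * (1 - theta) * A)))

u0 = np.linspace(1, 2, M1)
qs = [np.full(M1, 0.3 * n) for n in range(1, N + 1)]   # q_h((n-1+theta)tau), made up
p = np.concatenate([tau * qs[0] + (I - tau * (1 - theta) * A) @ u0]
                   + [tau * qs[n] for n in range(1, N)])

u = np.linalg.solve(C, p).reshape(N, M1)   # all time levels at once, as in (7.97)

v = u0                                     # step-by-step recursion for comparison
for n in range(N):
    v = np.linalg.solve(I + tau * theta * A,
                        (I - tau * (1 - theta) * A) @ v + tau * qs[n])
    assert np.allclose(v, u[n])
print("block system reproduces the recursion")
```

As the text notes, (7.97) is useful only for the analysis; the computation proceeds level by level.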

Since the spatial discretization is performed as in the stationary case, and in the nth step the discretization relates to t = t_{n−1} + Θτ, and also the approximation

  A_h u(t_{n−1} + Θτ) ≈ Θ̄ A_h u(t_{n−1}) + Θ A_h u(t_n)

enters the formulation (7.66), we can assume the right-hand side of (7.66) to have the following structure:

  q_h((n − 1 + Θ)τ) = −Â_h (Θ̄ û_h^{n−1} + Θ û_h^n) + f((n − 1 + Θ)τ)  for n = 1, . . . , N .  (7.98)

Here the û_h^n ∈ ℝ^{M_2} are the known spatial boundary values on time level t_n, which have been eliminated from the equations as explained, e.g., in Chapter 1 for the finite difference method. But as noted, we also allow for the case where such values do not appear (i.e., M_2 = 0); then (7.98) reduces to

  q_h((n − 1 + Θ)τ) = f((n − 1 + Θ)τ)  for n = 1, . . . , N .

For the continuous problem, data are prescribed at the parabolic boundary Ω × {0} ∪ ∂Ω × [0, T]; correspondingly, the known values û_h^i are collected together with the initial data u^0 ∈ ℝ^{M_1} into a large vector

  û_h = (u^0, û_h^0, û_h^1, . . . , û_h^N)^T ,

i.e., a vector of dimension M̄_2 := M_1 + (N + 1)M_2, which may reduce to û_h = u^0 ∈ ℝ^{M_1}.

With this notation we have

  p_h = −Ĉ_h û_h + e  (7.99)

if we define

  Ĉ_h = [ −I + τΘ̄A_h   τΘ̄Â_h   τΘÂ_h                    O ]
        [      O                  τΘ̄Â_h   τΘÂ_h            ]
        [      ⋮                         ⋱        ⋱        ]
        [      O                           τΘ̄Â_h   τΘÂ_h  ]

and

  e = ( τ f(Θτ), τ f((1 + Θ)τ), . . . , τ f((N − 1 + Θ)τ) )^T .

In the following, the validity of (1.32)* or (1.32) for

  C̃_h = (C_h, Ĉ_h)

will be investigated on the basis of corresponding properties of

  Ã_h = (A_h, Â_h) .

Note that even if A_h is irreducible, the matrix C_h is always reducible, since u_h^n depends only on u_h^1, . . . , u_h^{n−1}, but not on the future time levels. (Therefore, (7.97) serves only for the theoretical analysis, but not for the actual computation.)

In the following we assume that

  τ Θ̄ (A_h)_{jj} < 1  for j = 1, . . . , M_1 ,  (7.100)

which is always satisfied for the implicit Euler method (Θ = 1). Then:

(1) (C_h)_{rr} > 0 for r = 1, . . . , M̄_1 holds if (1) is valid for A_h. Actually, also (A_h)_{jj} > −1/(τΘ) would be sufficient.

(2) (C_h)_{rs} ≤ 0 for r, s = 1, . . . , M̄_1, r ≠ s: If (2) is valid for A_h, then only the nonpositivity of the diagonal elements of the off-diagonal block of C_h, −I + τΘ̄A_h, is in question. This is ensured by (7.100) (weakened to "≤").

(3) (i) C_r := Σ_{s=1}^{M̄_1} (C_h)_{rs} ≥ 0 for r = 1, . . . , M̄_1 :

(ii) C_r > 0 for at least one r ∈ {1, . . . , M̄_1}:

We set

  A_j := Σ_{k=1}^{M_1} (A_h)_{jk} ,

so that condition (3) (i) for A_h means that A_j ≥ 0 for j = 1, . . . , M_1. Therefore, we have

  C_r = 1 + τΘA_j > 0  (7.101)

for the indices r of the first time level, where the "global" index r corresponds to the "local" spatial index j. For the following time levels, the relation

  C_r = 1 − 1 + τ(Θ + Θ̄)A_j = τA_j ≥ 0  (7.102)

holds, i.e., (3) (i) and (ii).
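The row-sum identities (7.101) and (7.102) can be confirmed on a concrete C_h. A sketch, again assuming a 1D Dirichlet Laplacian as A_h (illustrative only):

```python
import numpy as np

M1, N, theta, tau = 3, 3, 0.5, 0.1
h = 1.0 / (M1 + 1)
A = (np.diag([2.0] * M1) - np.diag([1.0] * (M1 - 1), 1)
     - np.diag([1.0] * (M1 - 1), -1)) / h**2
I = np.eye(M1)
C = (np.kron(np.eye(N), I + tau * theta * A)
     + np.kron(np.eye(N, k=-1), -(I - tau * (1 - theta) * A)))

Aj = A.sum(axis=1)          # spatial row sums A_j (here >= 0)
rows = C.sum(axis=1)        # row sums C_r of the space-time matrix
print(rows)
# first time level: C_r = 1 + tau*theta*A_j, cf. (7.101)
assert np.allclose(rows[:M1], 1 + tau * theta * Aj)
# later time levels: C_r = tau*A_j, cf. (7.102)
assert np.allclose(rows[M1:], np.tile(tau * Aj, N - 1))
```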

(4)* For every r_1 ∈ {1, . . . , M̄_1} satisfying

  Σ_{s=1}^{M̄_1} (C_h)_{r_1 s} = 0  (7.103)

there exist indices r_2, . . . , r_{l+1} such that

  (C_h)_{r_i r_{i+1}} ≠ 0  for i = 1, . . . , l

and

  Σ_{s=1}^{M̄_1} (C_h)_{r_{l+1} s} > 0 .  (7.104)

To avoid too many technicalities, we adopt the background of a finite difference method; actually, only matrix properties enter the reasoning. We call (space-time) grid points satisfying (7.103) far from the boundary, and those satisfying (7.104) close to the boundary. Due to (7.101), all points of the first time level are close to the boundary (consistent with the fact that the grid points for t_0 = 0 belong to the parabolic boundary). For the subsequent time levels n, due to (7.102), a point (x_i, t_n) is close to the boundary if x_i is close to the boundary with respect to A_h. Therefore, the requirement of (4)*, that a point far from the boundary can be connected via a chain of neighbours to a point close to the boundary, can be realized in two ways: Firstly, within the time level n, i.e., in the diagonal block of C_h, if A_h satisfies condition (4)*. Secondly, without this assumption, a chain of neighbours exists by (x, t_n), (x, t_{n−1}), up to (x, t_1), i.e., to a point close to the boundary, since the diagonal elements of −I + τΘ̄A_h do not vanish due to (7.100). This reasoning additionally has established the following:

(4)# If A_h is irreducible, then a grid point (x, t_n), x ∈ Ω_h, can be connected via a chain of neighbours to every grid point (y, t_k), y ∈ Ω_h, 0 ≤ k ≤ n.

(5) (C̃_h)_{rs} ≤ 0 for r = 1, . . . , M̄_1, s = M̄_1 + 1, . . . , M̄, where M̄ := M̄_1 + M̄_2: Analogously to (2), this follows from (5) for Ã_h and (7.100).

(6)* C̃_r := Σ_{s=1}^{M̄} (C̃_h)_{rs} = 0 for r = 1, . . . , M̄_1: Analogously to (7.102), we have

  C̃_r = τ Ã_j := τ Σ_{k=1}^{M} (Ã_h)_{jk} ,

so that the property is equivalent to the corresponding one of Ã_h.

(6) C̃_r ≥ 0 for r = 1, . . . , M̄_1 is equivalent to (6) for Ã_h by the above argument.

(7) For every s ∈ {M̄_1 + 1, . . . , M̄} there exists an r ∈ {1, . . . , M̄_1} such that (C̃_h)_{rs} ≠ 0:

Every listed boundary value should influence the solution: For the values from û_h^0, . . . , û_h^N this is the case iff Â_h satisfies this property. Furthermore, the "local" indices of the equations in which the boundary values appear are the same for each time level. For the values from u^0 ∈ ℝ^{M_1} the assertion follows from (7.100).

From these considerations we have the following theorem:

Theorem 7.28 Consider the one-step-theta method in the form (7.66). Let (7.100) hold. If the spatial discretization Ã_h satisfies (1.32) (1), (2), (3) (i), and (5), then a comparison principle holds:

(1) If for two sets of data f_i, u_{0i}, and û_{hi}^n, n = 0, . . . , N and i = 1, 2, we have

  f_1((n − 1 + Θ)τ) ≤ f_2((n − 1 + Θ)τ)  for n = 1, . . . , N ,

and

  u_{01} ≤ u_{02} ,  û_{h1}^n ≤ û_{h2}^n  for n = 0, . . . , N ,

then

  u_{h1}^n ≤ u_{h2}^n  for n = 1, . . . , N

for the corresponding solutions. If û_{h1}^n = û_{h2}^n for n = 1, . . . , N, then condition (1.32) (5) can be omitted.

(2) If Ã_h additionally satisfies (1.32) (6)*, then the following weak maximum principle holds:

  max{ (ũ_h^n)_r : r ∈ {1, . . . , M}, n = 0, . . . , N }
    ≤ max{ max_{r∈{1,...,M_1}} (u^0)_r , max{ (û_h^n)_r : r ∈ {M_1+1, . . . , M}, n = 0, . . . , N } } ,

where

  ũ_h^n := (u_h^n, û_h^n)^T .

(3) If Ã_h satisfies (1.32) (1), (2), (3) (i), (4), (5), (6), (7), then a strong maximum principle holds in the following sense: If the components of ũ_h^n, n = 0, . . . , N, attain a nonnegative maximum for some spatial index r ∈ {1, . . . , M_1} and some time level k ∈ {1, . . . , N}, then all components for the time levels n = 0, . . . , k are equal.

Proof: Only part (3) needs further consideration. Theorem 1.9 cannot be applied directly to (7.97), since C_h is reducible. Therefore, the proof of Theorem 1.9 has to be repeated: We conclude that the solution is constant at all points that are connected via a chain of neighbours to the point where the maximum is attained. According to (4)# these include all grid points (x, t_l) with x ∈ Ω_h and l ∈ {0, . . . , k}. From (7.100) and the discussion of (7) we see that the connection can also be continued to the boundary values up to level k.  □

The additional condition (7.100), which may be weakened to a nonstrict inequality, as seen above, is actually a time step restriction: Consider again the example of the five-point stencil discretization of the heat equation on a rectangle, for which we have (A_h)_{jj} = 4/h². Then the condition takes the form

  τ/h² < 1/(4(1 − Θ))  (7.105)

for Θ < 1. This is very similar to the conditions (7.87), (7.88) for the explicit Euler method, but the background is different.

As already noted, the results above also apply to the more general form (7.67) under the assumption (7.69). The condition (7.100) then takes the form

  τ Θ̄ (A_h)_{jj} ≤ b_j  for j = 1, . . . , M_1 .

Exercises

7.14 Formulate the results of this section, in particular condition (7.100), for the problem in the form (7.67) with B_h according to (7.69) (i.e., appropriate for finite element discretizations with mass lumping, see (7.74)).

7.15 Show the validity of (6)# from Exercise 1.13 for C̃_h if it holds here for Ã_h, and conclude as in Exercise 1.13 a weak maximum principle for the one-step-theta method.

7.16 Consider the initial-boundary value problem in one space dimension

  u_t − µu_xx + cu_x = f  in (0, 1) × (0, T) ,
  u(0, t) = g_−(t), u(1, t) = g_+(t) ,  t ∈ (0, T) ,
  u(x, 0) = u_0(x) ,  x ∈ (0, 1) ,

where T > 0 and µ > 0 are constants, and c, f : (0, 1) × (0, T) → ℝ, u_0 : (0, 1) → ℝ, and g_−, g_+ : (0, T) → ℝ are sufficiently smooth functions such that the problem has a classical solution.

Define h := 1/m and τ := T/N for some numbers m, N ∈ ℕ. Then the so-called full-upwind finite difference method for this problem reads as follows: Find a sequence of vectors u_h^0, . . . , u_h^N by

  (u_i^{n+1} − u_i^n)/τ − µ (u_{i+1}^{n+1} − 2u_i^{n+1} + u_{i−1}^{n+1})/h² + c_+ (u_i^{n+1} − u_{i−1}^{n+1})/h − c_− (u_{i+1}^{n+1} − u_i^{n+1})/h = f_i^{n+1} ,
  i = 1, . . . , m − 1, n = 0, . . . , N − 1,

where c = c_+ − c_− with c_+ = max{c, 0} and c_− = max{−c, 0}, f_i^n = f(ih, nτ), u_i^0 = u_0(ih), u_0^n = g_−(nτ), and u_m^n = g_+(nτ).

Prove that a weak maximum principle holds for this method.
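The weak maximum principle of Exercise 7.16 can at least be observed numerically: for f = 0 the computed values never exceed the maximum of the initial and boundary data. A sketch assuming a constant velocity c and made-up data g_−, g_+, u_0 (a numerical check, not the requested proof):

```python
import numpy as np

m, N, mu, c, T = 20, 40, 0.01, 1.5, 1.0
h, tau = 1.0 / m, T / N
cp, cm = max(c, 0.0), max(-c, 0.0)        # c = c_+ - c_-

gm = lambda t: 0.2                        # made-up boundary data, f = 0
gp = lambda t: np.sin(3 * t)
u = np.cos(5 * np.linspace(0, 1, m + 1))  # u_0 on the grid, boundary included

# implicit full-upwind system for the interior unknowns u_1..u_{m-1}
main = 1 / tau + 2 * mu / h**2 + cp / h + cm / h
low = -mu / h**2 - cp / h                 # coefficient of u_{i-1}^{n+1}
upp = -mu / h**2 - cm / h                 # coefficient of u_{i+1}^{n+1}
Aimp = (np.diag([main] * (m - 1)) + np.diag([low] * (m - 2), -1)
        + np.diag([upp] * (m - 2), 1))

data_max = u.max()
for n in range(N):
    t1 = (n + 1) * tau
    rhs = u[1:m] / tau
    rhs[0] -= low * gm(t1)                # boundary contributions
    rhs[-1] -= upp * gp(t1)
    u = np.concatenate(([gm(t1)], np.linalg.solve(Aimp, rhs), [gp(t1)]))
    data_max = max(data_max, gm(t1), gp(t1))
    assert u.max() <= data_max + 1e-12    # weak maximum principle (f = 0)
print(u.max(), data_max)
```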

7.6 Order of Convergence Estimates

Based on stability results already derived, we will investigate the (order of) convergence properties of the one-step-theta method for different discretization approaches. Although the results will be comparable, they will be in different norms, appropriate for the specific discretization method, as already seen in Chapters 1, 3, and 6.

Order of Convergence Estimates for the Finite Difference Method

From Section 1.4 we know that the investigation of the (order of) convergence of a finite difference method consists of two ingredients:

• (order of) convergence of the consistency error,

• stability estimates.

The latter tool has already been provided by Theorem 7.26 and by Theorem 1.14, which together with the considerations of Section 7.5 allow us to concentrate on the consistency error. Certain smoothness properties will be required for the classical solution u of the initial boundary value problem (7.1), which in particular makes possible its evaluation at the grid points x_i ∈ Ω_h at each instant of time t ∈ [0, T], and also of various derivatives. The vector representing the corresponding grid function (for a fixed ordering of the grid points) will be denoted by U(t), or for short by U^n := U(t_n) for t = t_n. The corresponding grid points depend on the boundary condition. For a pure Dirichlet problem, the grid points will be from Ω_h, but if Neumann or mixed boundary conditions appear, they are from the enlarged set

  Ω̃_h := Ω̄_h ∩ (Ω ∪ Γ_1 ∪ Γ_2) .  (7.106)

Then the error at the grid points and each time level is given by

  e_h^n := U^n − u_h^n  for n = 0, . . . , N ,  (7.107)

where u_h^n is the solution of the one-step-theta method according to (7.66). The consistency error q̂_h as a grid function on Ω_h × {t_1, . . . , t_N}, or correspondingly a sequence of vectors q̂_h^n ∈ ℝ^{M_1} for n = 1, . . . , N, is then defined by

  q̂_h^{n+1} := (1/τ)(U^{n+1} − U^n) + Θ A_h U^{n+1} + Θ̄ A_h U^n − q_h((n + Θ)τ)  (7.108)

for n = 0, . . . , N − 1. Then the error grid function obviously satisfies

  (1/τ)(e_h^{n+1} − e_h^n) + Θ A_h e_h^{n+1} + Θ̄ A_h e_h^n = q̂_h^{n+1}  for n = 0, . . . , N − 1 ,
  e_h^0 = 0  (7.109)

(or nonvanishing initial data if the initial condition is not evaluated exactly at the grid points). In the following we estimate the grid function q̂_h in the discrete maximum norm

  ‖q̂_h‖_∞ := max{ |(q̂_h^n)_r| : r ∈ {1, . . . , M_1}, n ∈ {1, . . . , N} } = max{ |q̂_h^n|_∞ : n ∈ {1, . . . , N} } ,  (7.110)

i.e., pointwise in space and time. An alternative norm would be the discrete L²-norm, i.e.,

  ‖q̂_h‖_{0,h} := ( τ Σ_{n=1}^N Σ_{r=1}^{M_1} h^d |(q̂_h^n)_r|² )^{1/2} = ( τ Σ_{n=1}^N |q̂_h^n|_{0,h}² )^{1/2} ,  (7.111)

using the spatial discrete L²-norm from (7.59), where the same notation is employed. If for the sequence of underlying grids considered there is a constant C > 0 independent of the discretization parameter h such that

  M_1 = M_1(h) ≤ C h^{−d} ,  (7.112)

then obviously,

  ‖q̂_h‖_{0,h} ≤ (CT)^{1/2} ‖q̂_h‖_∞ ,

so that the L²-norm is weaker than the maximum norm. Condition (7.112) is satisfied for uniform grids such as those considered in Section 1.2. A norm in between is defined by

  ‖q̂_h‖_{∞,0,h} := max{ |q̂_h^n|_{0,h} : n = 1, . . . , N } ,  (7.113)

which is stronger than (7.111) and, in the case of (7.112), weaker than the maximum norm.

Analogously to Section 1.4, we denote by Ũ^n ∈ ℝ^M the vector U^n amended by the eliminated boundary values Û_h^n ∈ ℝ^{M_2}.

For simplicity we restrict attention, at the beginning, to the case of pure Dirichlet data. Taking into account (7.98), and assuming that f((n − 1 + Θ)τ) is derived from the continuous right-hand side by evaluation at the grid points, we get

  q̂_h^{n+1} = [ (1/τ)(U^{n+1} − U^n) − (d/dt)U(t_n + Θτ) ] + [ Θ Ã_h Ũ^{n+1} + Θ̄ Ã_h Ũ^n − (LU)(t_n + Θτ) ] =: S_1 + S_2 ,  (7.114)

so that S_1, consisting of the first two terms, is the consistency error of the time discretization.

so that S 1 , consisting of the ¬rst two terms, is the consistency error for the

time discretization.

d

Here dt U and LU are the vectors representing the grid functions corre-

d

sponding to dt u and Lu, which requires the continuity of these functions

as in the notion of a classical solution. We make the following assumption:

The spatial discretization has the order of consistency ± measured in

· ∞ (according to (1.17)) if the solution of the stationary problem (7.6)

is in C p („¦) for some ± > 0 and p ∈ N.

For example, for the Dirichlet problem of the Poisson equation and the

¬ve-point stencil discretization on a rectangle, we have seen in Chapter 1

that ± = 2 is valid for p = 4. If we assume for u(·, t), u being the solution

of (7.1), that

the spatial derivatives up to order p exist continuously

and are bounded uniformly in t ∈ [0, T ] , (7.115)

then there exists a constant C > 0 such that

|(Ah U (t))i ’ (Lu(·, t))(xi )| ¤ Ch±

˜˜ (7.116)

for every grid point xi ∈ „¦h and t ∈ [0, T ].

In the case of Neumann or mixed boundary conditions, some of the equations will correspond to discretizations of these boundary conditions. This discretization may be directly a discretization of (7.3) or (7.4) (typically, if one-sided difference quotients are used) or a linear combination of the discretizations of the differential operator at x_i ∈ Ω̃_h and of the boundary differential operator of (7.3) or (7.4) (to eliminate "artificial" grid points) (see Section 1.3). Thus we have to take x_i ∈ Ω̃_h and interpret Lu in (7.116) as this modified differential operator for x_i ∈ Γ_1 ∪ Γ_2, just described, to extend all the above reasoning to the general case.

The estimation of the contribution S_2 on the basis of (7.116) is directly possible for Θ = 0 or Θ = 1, but requires further smoothness for Θ ∈ (0, 1). We have

  S_2 = S_3 + S_4 ,

where

  S_3 := Θ (Ã_h Ũ^{n+1} − (LU)(t_{n+1})) + Θ̄ (Ã_h Ũ^n − (LU)(t_n)) ,
  S_4 := Θ (LU)(t_{n+1}) + Θ̄ (LU)(t_n) − (LU)(t_n + Θτ) .

By Taylor expansion we conclude for a function v ∈ C²[0, T] that

  Θ v(t_{n+1}) + Θ̄ v(t_n) = v(t_n + Θτ) + τ² [ (Θ̄Θ²/2) v''(t_n¹) + (ΘΘ̄²/2) v''(t_n²) ]

for some t_n¹ ∈ (t_n, t_n + Θτ), t_n² ∈ (t_n + Θτ, t_{n+1}), so that

  |S_4|_∞ ≤ C τ²  (7.117)

for some constant C > 0 independent of τ and h, if for Θ ∈ (0, 1) the solution u of (7.1) satisfies

  (∂/∂t) Lu , (∂²/∂t²) Lu ∈ C(Q̄_T) .  (7.118)

This is a quite severe regularity assumption, which often does not hold. For S_3 we conclude directly from (7.116) that

  |S_3|_∞ ≤ C h^α .  (7.119)

1

and ˜ = 1 : If

To estimate S 1 we have to distinguish between ˜ = 2 2

‚2 ‚3

‚ 1

u ∈ C(QT ) and for ˜ = u ∈ C(QT ),

u, also (7.120)

‚t2 ‚t3

‚t 2

then Lemma 1.2 implies (for ˜ = 0, 1, 1 , for ˜ ∈ (0, 1) again with a Taylor

2

expansion)

|S 1 |∞ ¤ C„ β (7.121)

for some constant C, independent of „ and h, with β = 1 for ˜ = 1 and

2

β = 2 for ˜ = 1 .

2

Thus, under the additional regularity assumptions (7.115), (7.118),

(7.120), and if the spatial discretization has order of consistency ± in

334 7. Discretization of Parabolic Problems

the maximum norm, i.e., (7.116), then the one-step-theta method has the following order of consistency:

    ‖q̂_h‖_∞ ≤ C(h^α + τ^β)                                  (7.122)

for some constant C, independent of τ and h, with β as in (7.121).

By using a weaker norm one might hope to achieve a higher order of convergence. If this is, for example, the case for the spatial discretization, e.g., by considering the discrete L^2-norm ‖·‖_{0,h} instead of ‖·‖_∞, then instead of (7.116) we have

    ‖Ã_h Ũ(t) − L̃u(·, t)‖_{0,h} ≤ Ch^α,                     (7.123)

where the terms in the norm denote the corresponding grid functions. Then again under (weaker forms of) the additional regularity assumptions (7.115), (7.118), (7.120) and assuming (7.112), we have

    ‖q̂_h‖_{0,h} ≤ C(h^α + τ^β).                             (7.124)

By means of Theorem 7.26 we can conclude the first order-of-convergence result:

Theorem 7.29 Consider the one-step-theta method and assume that the spatial discretization matrix A_h has a basis of eigenvectors w_i with eigenvalues λ_i ≥ 0, i = 1, ..., M_1, orthogonal with respect to the scalar product ⟨·, ·⟩_h defined in (7.58). The spatial discretization has order of consistency α in ‖·‖_{0,h} for solutions in C^p(Ω̄). If τ is such that the method is stable according to (7.95), then for a sufficiently smooth solution u of (7.1) (e.g., (7.115), (7.118), (7.120)), and for a sequence of grid points satisfying (7.112), the method converges in the norm ‖·‖_{∞,0,h} with the order

    O(h^α + τ^β),

where β = 2 for Θ = 1/2 and β = 1 otherwise.

Proof: Due to Theorem 7.26 and (7.109) we have to estimate the consistency error in a norm defined by τ Σ_{n=1}^N ‖q̂_h^n‖_{0,h} (i.e., a discrete L^1-L^2-norm), which is weaker than ‖q̂_h‖_{0,h}, in which the estimate has been verified in (7.124). □

Again we see here a smoothing effect in time: The consistency error has to be controlled only in a discrete L^1-sense to gain a convergence result in a discrete L^∞-sense.
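The two temporal orders can be observed directly on the scalar model problem ξ' + λξ = 0: each step of the one-step-theta method multiplies ξ^n by the stability function R(−λτ) = (1 − (1 − Θ)λτ)/(1 + Θλτ). The following sketch (an illustration, not from the text; λ = 10 and T = 1 are arbitrary choices) halves τ and checks that the error at T is roughly halved for Θ = 1 (β = 1) and quartered for Θ = 1/2 (β = 2):

```python
import numpy as np

def theta_error(lam, T, N, theta):
    # Error of the one-step-theta method for xi' + lam*xi = 0, xi(0) = 1,
    # advanced by its stability function R(-lam*tau).
    tau = T / N
    R = (1.0 - (1.0 - theta) * lam * tau) / (1.0 + theta * lam * tau)
    return abs(R**N - np.exp(-lam * T))

lam, T = 10.0, 1.0
# Halving tau should halve the error for Theta = 1 (implicit Euler, beta = 1)
r_be = theta_error(lam, T, 100, 1.0) / theta_error(lam, T, 200, 1.0)
# ... and quarter it for Theta = 1/2 (Crank-Nicolson, beta = 2).
r_cn = theta_error(lam, T, 100, 0.5) / theta_error(lam, T, 200, 0.5)
```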

If a consistency estimate is provided in ‖·‖_∞ as in (7.122), a convergence estimate still needs the corresponding stability. Instead of constructing a vector as in Theorem 1.14 for the formulation (7.97), we will argue directly with the help of the comparison principle (Theorem 7.28, (1)), which would have been possible also in Section 1.4 (see Exercise 1.14).


Theorem 7.30 Consider the one-step-theta method and assume that the spatial discretization matrix A_h satisfies (1.32) (1), (2), (3) (i), and assume its L^∞-stability by the existence of vectors w_h ∈ R^{M_1} and a constant C > 0 independent of h such that

    A_h w_h ≥ 1  and  |w_h|_∞ ≤ C.                           (7.125)

The spatial discretization has order of consistency α in ‖·‖_∞ for solutions in C^p(Ω̄). If (7.100) is satisfied, then for a sufficiently smooth solution u of (7.1) (e.g., (7.115), (7.118), (7.120)) the method converges in the norm ‖·‖_∞ with the order

    O(h^α + τ^β),

where β = 2 for Θ = 1/2 and β = 1 otherwise.

Proof: From (7.122) we conclude that

    −Ĉ(h^α + τ^β)1 ≤ q̂_h^n ≤ Ĉ(h^α + τ^β)1  for n = 1, ..., N

for some constant Ĉ independent of h and τ. Thus (7.109) implies

    (1/τ)(e_h^{n+1} − e_h^n) + ΘA_h e_h^{n+1} + (1 − Θ)A_h e_h^n ≤ Ĉ(h^α + τ^β)1,
    e_h^0 = 0.

Setting w_h^n := Ĉ(h^α + τ^β)w_h with w_h from (7.125), this constant sequence of vectors satisfies

    (1/τ)(w_h^{n+1} − w_h^n) + ΘA_h w_h^{n+1} + (1 − Θ)A_h w_h^n ≥ Ĉ(h^α + τ^β)1.

Therefore, the comparison principle (Theorem 7.28, (1)) implies

    e_h^n ≤ w_h^n = Ĉ(h^α + τ^β)w_h

for n = 0, ..., N, and analogously we see that

    −Ĉ(h^α + τ^β)w_h ≤ e_h^n,

so that

    |(e_h^n)_j| ≤ Ĉ(h^α + τ^β)(w_h)_j                        (7.126)

for all n = 0, ..., N and j = 1, ..., M_1, and finally,

    |e_h^n|_∞ ≤ Ĉ(h^α + τ^β)|w_h|_∞ ≤ Ĉ(h^α + τ^β)C

with the constant C from (7.125). □

Note that the pointwise estimate (7.126) is more precise, since it also takes into account the shape of w_h. In the example of the five-point stencil with Dirichlet conditions on the rectangle (see the discussion around (1.43)) the error bound is smaller in the vicinity of the boundary (which is to be expected due to the exactly fulfilled boundary conditions).
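In one space dimension the shape of such a comparison vector can be made explicit. The sketch below (an illustration, not from the text) assembles the three-point stencil for −u'' on (0, 1) with homogeneous Dirichlet conditions and solves A_h w_h = 1; since the stencil is exact for quadratics, w_h coincides with the grid values of x(1 − x)/2, so |w_h|_∞ is bounded by 1/8 independently of h, and the entries next to the boundary are markedly smaller:

```python
import numpy as np

M = 10                                    # number of subintervals of (0, 1)
h = 1.0 / M
x = np.linspace(0.0, 1.0, M + 1)[1:-1]    # interior grid points
# Three-point stencil for -u'' with homogeneous Dirichlet conditions:
A = (np.diag(2.0 * np.ones(M - 1))
     + np.diag(-np.ones(M - 2), 1)
     + np.diag(-np.ones(M - 2), -1)) / h**2
w = np.linalg.solve(A, np.ones(M - 1))    # comparison vector: A_h w_h = 1
# The stencil is exact for quadratics, so w equals x(1 - x)/2 on the grid:
# bounded by 1/8 and small in the vicinity of the boundary.
```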

Order of Convergence Estimates for the Finite Element Method

We now return to the one-step-theta method for the finite element method as introduced in (7.72). In particular, instead of considering grid functions as for the finite difference method, the finite element method allows us to consider directly a function U^n from the finite-dimensional approximation space V_h and thus from the underlying function space V.

In the following, an error analysis for the case Θ ∈ [1/2, 1] under the assumption u ∈ C^2([0, T], V) will be given. In analogy with the decomposition of the error in the semidiscrete situation, we write

    u(t_n) − U^n = u(t_n) − R_h u(t_n) + R_h u(t_n) − U^n =: ϱ(t_n) + θ^n.

The first term of the right-hand side is the error of the elliptic projection at the time t_n, and for this term an estimate is already known. The following identity, which results immediately from the definition of the elliptic projection, is used to estimate the second term of the right-hand side:

    ⟨(1/τ)(θ^{n+1} − θ^n), v_h⟩_0 + a(Θθ^{n+1} + (1 − Θ)θ^n, v_h)
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_0 + a(ΘR_h u(t_{n+1}) + (1 − Θ)R_h u(t_n), v_h)
        − ⟨(1/τ)(U^{n+1} − U^n), v_h⟩_0 − a(ΘU^{n+1} + (1 − Θ)U^n, v_h)
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_0 + a(Θu(t_{n+1}) + (1 − Θ)u(t_n), v_h) − b^{n+Θ}(v_h)
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_0 − ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_0
      = ⟨w^n, v_h⟩_0,

where

    w^n := (1/τ)(R_h u(t_{n+1}) − R_h u(t_n)) − Θu'(t_{n+1}) − (1 − Θ)u'(t_n).

Taking into consideration the inequality a(v_h, v_h) ≥ 0, the particular choice of the test function as v_h = Θθ^{n+1} + (1 − Θ)θ^n yields

    Θ‖θ^{n+1}‖_0^2 + (1 − 2Θ)⟨θ^n, θ^{n+1}⟩_0 − (1 − Θ)‖θ^n‖_0^2 ≤ τ⟨w^n, Θθ^{n+1} + (1 − Θ)θ^n⟩_0.

For Θ ∈ [1/2, 1] we have (1 − 2Θ) ≤ 0, and hence

    [‖θ^{n+1}‖_0 − ‖θ^n‖_0][Θ‖θ^{n+1}‖_0 + (1 − Θ)‖θ^n‖_0]
      = Θ‖θ^{n+1}‖_0^2 + (1 − 2Θ)‖θ^n‖_0 ‖θ^{n+1}‖_0 − (1 − Θ)‖θ^n‖_0^2
      ≤ Θ‖θ^{n+1}‖_0^2 + (1 − 2Θ)⟨θ^n, θ^{n+1}⟩_0 − (1 − Θ)‖θ^n‖_0^2
      ≤ τ‖w^n‖_0 [Θ‖θ^{n+1}‖_0 + (1 − Θ)‖θ^n‖_0].


Dividing each side by the expression in the square brackets, we get

    ‖θ^{n+1}‖_0 ≤ ‖θ^n‖_0 + τ‖w^n‖_0.

The recursive application of this inequality leads to

    ‖θ^{n+1}‖_0 ≤ ‖θ^0‖_0 + τ Σ_{j=0}^n ‖w^j‖_0.             (7.127)

That is, it remains to estimate the terms ‖w^j‖_0. A simple algebraic manipulation yields

    w^n = (1/τ)((R_h − I)u(t_{n+1}) − (R_h − I)u(t_n)) + (1/τ)(u(t_{n+1}) − u(t_n))
          − Θu'(t_{n+1}) − (1 − Θ)u'(t_n).                   (7.128)

Taylor expansion with integral remainder implies

    u(t_{n+1}) = u(t_n) + u'(t_n)τ + ∫_{t_n}^{t_{n+1}} (t_{n+1} − s)u''(s) ds

and

    u(t_n) = u(t_{n+1}) − u'(t_{n+1})τ + ∫_{t_{n+1}}^{t_n} (t_n − s)u''(s) ds.

Using the above relations we get the following useful representations of the difference quotient of u in t_n:

    (1/τ)(u(t_{n+1}) − u(t_n)) = u'(t_n) + (1/τ)∫_{t_n}^{t_{n+1}} (t_{n+1} − s)u''(s) ds,
    (1/τ)(u(t_{n+1}) − u(t_n)) = u'(t_{n+1}) + (1/τ)∫_{t_n}^{t_{n+1}} (t_n − s)u''(s) ds.

Multiplying the first equation by (1 − Θ) and the second one by Θ, the summation of the results yields

    (1/τ)(u(t_{n+1}) − u(t_n)) = Θu'(t_{n+1}) + (1 − Θ)u'(t_n)
        + (1/τ)∫_{t_n}^{t_{n+1}} [Θt_n + (1 − Θ)t_{n+1} − s]u''(s) ds.

Since |Θt_n + (1 − Θ)t_{n+1} − s| ≤ τ for s ∈ [t_n, t_{n+1}], the second term in the decomposition (7.128) of w^n can be estimated as

    ‖(1/τ)(u(t_{n+1}) − u(t_n)) − Θu'(t_{n+1}) − (1 − Θ)u'(t_n)‖_0 ≤ ∫_{t_n}^{t_{n+1}} ‖u''(s)‖_0 ds.

To estimate the first term in (7.128), Taylor expansion with integral remainder is applied to the function v(t) := (R_h − I)u(t). Then we have

    (1/τ)((R_h − I)u(t_{n+1}) − (R_h − I)u(t_n)) = (1/τ)∫_{t_n}^{t_{n+1}} [(R_h − I)u(s)]' ds.

With the assumption on u, using the fact that the derivative and the elliptic projection commute, we get

    ‖(1/τ)((R_h − I)u(t_{n+1}) − (R_h − I)u(t_n))‖_0 ≤ (1/τ)∫_{t_n}^{t_{n+1}} ‖(R_h − I)u'(s)‖_0 ds.

With (7.127) and summing the estimates for ‖w^n‖_0 we obtain the following result:

Theorem 7.31 Let a be a V-elliptic, continuous bilinear form, u_{0h} ∈ V_h, u_0 ∈ V, Θ ∈ [1/2, 1]. If u ∈ C^2([0, T], V), then

    ‖u(t_n) − U^n‖_0 ≤ ‖u_{0h} − R_h u_0‖_0 + ‖(I − R_h)u(t_n)‖_0
        + ∫_0^{t_n} ‖(I − R_h)u'(s)‖_0 ds + τ ∫_0^{t_n} ‖u''(s)‖_0 ds.

Remark 7.32 (i) Under stronger smoothness assumptions on u and by detailed considerations it can also be shown that the Crank–Nicolson method (Θ = 1/2) is of order 2 in τ.

(ii) Contrary to the semidiscrete situation (Theorem 7.12), the fully discrete estimate does not reflect any exponential decay in time.

Utilizing the error estimate for the elliptic projection as in Section 7.2 (cf. Theorem 7.12) and assuming u_0 ∈ V ∩ H^2(Ω), we have

    ‖u(t_n) − U^n‖_0 ≤ ‖u_{0h} − u_0‖_0 + Ch^2 ( ‖u_0‖_2 + ‖u(t_n)‖_2 + ∫_0^{t_n} ‖u'(s)‖_2 ds )
        + τ ∫_0^{t_n} ‖u''(s)‖_0 ds.

If, in addition, ‖u_{0h} − u_0‖_0 ≤ Ch^2 ‖u_0‖_2, we obtain

    ‖u(t_n) − U^n‖_0 ≤ C(u)(h^2 + τ),

with C(u) > 0 depending on the solution u (and thus on u_0) but not depending on h and τ.

To conclude this section we give without proof a summary of error estimates for all possible values of Θ:

    ‖u(t_n) − U^n‖_0 ≤ C(u)(h^2 + τ)     if Θ ∈ [1/2, 1],
    ‖u(t_n) − U^n‖_0 ≤ C(u)(h^2 + τ^2)   if Θ = 1/2,            (7.129)
    ‖u(t_n) − U^n‖_0 ≤ C(u)h^2           if Θ ∈ [0, 1] and τ ≤ σh^2,

where σ > 0 is a constant upper bound of the step size relation τ/h^2. The occurrence of such a restriction is not surprising, since similar requirements have already appeared for the finite difference method.

We also mention that the above restriction to a constant step size τ is only for simplicity of the notation. If a variable step size τ_{n+1} is used (which is typically determined by a step size control strategy), then the number τ in Theorem 7.31 is to be replaced by max_{n=0,...,N−1} τ_n.
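The τ-part of (7.129) can also be observed for the heat equation itself. In the sketch below (an illustration, not from the text) u_t = u_xx on (0, 1) is semidiscretized by the three-point stencil; the initial value sin(πx) is an eigenvector of the discrete Laplacian, so the semidiscrete solution is known in closed form and the measured error is purely temporal:

```python
import numpy as np

def temporal_error(M, N, theta, T=1.0):
    # u_t = u_xx on (0,1), u(x,0) = sin(pi x), homogeneous Dirichlet data.
    # sin(pi x) is an eigenvector of the three-point discrete Laplacian with
    # eigenvalue lam = (2/h^2)(1 - cos(pi h)), so each theta step multiplies
    # the grid function by R(-lam*tau); the semidiscrete reference solution
    # is exp(-lam*T) sin(pi x), and the error below is purely temporal.
    h, tau = 1.0 / M, T / N
    lam = 2.0 / h**2 * (1.0 - np.cos(np.pi * h))
    R = (1.0 - (1.0 - theta) * lam * tau) / (1.0 + theta * lam * tau)
    return abs(R**N - np.exp(-lam * T))

M = 50
r_ie = temporal_error(M, 100, 1.0) / temporal_error(M, 200, 1.0)   # beta = 1
r_cn = temporal_error(M, 100, 0.5) / temporal_error(M, 200, 0.5)   # beta = 2
```

Halving τ roughly halves the error for the implicit Euler choice Θ = 1 and quarters it for the Crank–Nicolson choice Θ = 1/2, in line with (7.129).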


Order of Convergence Estimates for the Finite Volume Method

We now consider the one-step-theta method for the finite volume method as introduced in (7.75). The error analysis will run in a similar way as for the finite element method. We write

    u(t_n) − U^n = u(t_n) − R_h u(t_n) + R_h u(t_n) − U^n =: ϱ(t_n) + θ^n,

where R_h is the auxiliary operator defined in (7.63). So for the first term of the right-hand side, an estimate is already known.

From the definition (7.63) and (7.32), we immediately derive the following identity:

    ⟨(1/τ)(θ^{n+1} − θ^n), v_h⟩_{0,h} + a_h(Θθ^{n+1} + (1 − Θ)θ^n, v_h)
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_{0,h} + a_h(ΘR_h u(t_{n+1}) + (1 − Θ)R_h u(t_n), v_h)
        − ⟨(1/τ)(U^{n+1} − U^n), v_h⟩_{0,h} − a_h(ΘU^{n+1} + (1 − Θ)U^n, v_h)
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_{0,h} + a(Θu(t_{n+1}) + (1 − Θ)u(t_n), v_h)
        − ⟨f^{n+Θ}, v_h⟩_{0,h}
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_{0,h} − ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_0
        + ⟨f^{n+Θ}, v_h⟩_0 − ⟨f^{n+Θ}, v_h⟩_{0,h}
      = ⟨(1/τ)(R_h u(t_{n+1}) − R_h u(t_n)), v_h⟩_{0,h} − ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_{0,h}
        + ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_{0,h} − ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_0
        + ⟨f^{n+Θ}, v_h⟩_0 − ⟨f^{n+Θ}, v_h⟩_{0,h}
      = ⟨w^n, v_h⟩_{0,h} + r^n(v_h),

where

    w^n := (1/τ)(R_h u(t_{n+1}) − R_h u(t_n)) − Θu'(t_{n+1}) − (1 − Θ)u'(t_n)

and

    r^n(v_h) := ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_{0,h} − ⟨Θu'(t_{n+1}) + (1 − Θ)u'(t_n), v_h⟩_0
        + ⟨f^{n+Θ}, v_h⟩_0 − ⟨f^{n+Θ}, v_h⟩_{0,h}.

Under the assumptions of Theorem 6.15, we know that a_h(v_h, v_h) ≥ 0 for all v_h ∈ V_h. The particular choice of the test function as v_h = ṽ_h := Θθ^{n+1} + (1 − Θ)θ^n yields, similarly to the finite element case, for Θ ∈ [1/2, 1] the


estimate

    [‖θ^{n+1}‖_{0,h} − ‖θ^n‖_{0,h}][Θ‖θ^{n+1}‖_{0,h} + (1 − Θ)‖θ^n‖_{0,h}]
      ≤ τ [⟨w^n, ṽ_h⟩_{0,h} + r^n(ṽ_h)]
      ≤ τ [‖w^n‖_{0,h} + sup_{v_h ∈ V_h} r^n(v_h)/‖v_h‖_{0,h}] ‖ṽ_h‖_{0,h}
      ≤ τ [‖w^n‖_{0,h} + sup_{v_h ∈ V_h} r^n(v_h)/‖v_h‖_{0,h}][Θ‖θ^{n+1}‖_{0,h} + (1 − Θ)‖θ^n‖_{0,h}].

Dividing each side by the expression in the square brackets, we get

    ‖θ^{n+1}‖_{0,h} ≤ ‖θ^n‖_{0,h} + τ [‖w^n‖_{0,h} + sup_{v_h ∈ V_h} r^n(v_h)/‖v_h‖_{0,h}].

The recursive application of this inequality leads to

    ‖θ^{n+1}‖_{0,h} ≤ ‖θ^0‖_{0,h} + τ Σ_{j=0}^n ‖w^j‖_{0,h} + τ Σ_{j=0}^n sup_{v_h ∈ V_h} r^j(v_h)/‖v_h‖_{0,h}.    (7.130)

The representation of w^j obtained in the subsection on the finite element method yields the following estimate:

    ‖w^j‖_{0,h} ≤ (1/τ)∫_{t_j}^{t_{j+1}} ‖(R_h − I)u'(s)‖_{0,h} ds + ∫_{t_j}^{t_{j+1}} ‖u''(s)‖_{0,h} ds.

Furthermore, by Lemma 7.14, we have

    |r^j(v_h)| ≤ Ch [Θ|u'(t_{j+1})|_{1,∞} + (1 − Θ)|u'(t_j)|_{1,∞} + |f^{j+Θ}|_{1,∞}] ‖v_h‖_{0,h}.

Using both estimates in (7.130), we obtain

    ‖θ^{n+1}‖_{0,h} ≤ ‖θ^0‖_{0,h} + C ( ∫_0^{t_{n+1}} ‖(R_h − I)u'(s)‖_{0,h} ds + τ ∫_0^{t_{n+1}} ‖u''(s)‖_{0,h} ds