    ‖e^{(k+1)}‖₂ = ‖ Σ_{ν: |λ_ν| ≤ 1/2} λ_ν c_ν z_ν ‖₂ = ( Σ_{ν: |λ_ν| ≤ 1/2} λ_ν² c_ν² )^{1/2}
                 ≤ (1/2) ( Σ_{ν: |λ_ν| ≤ 1/2} c_ν² )^{1/2} = (1/2) ‖e^{(k)}‖₂

if the eigenvectors are chosen orthonormal with respect to the Euclidean
scalar product (compare (5.67)). For such an initial error and with ex-
act arithmetic the method would thus have a “small” contraction number
independent of the discretization.
For Jacobi's method damped by ω = 1/2 this means: if the initial error consists of functions of high frequency only (in the sense of an eigenvector expansion involving only eigenvectors with k or l far from 1), then the above considerations hold. But already due to rounding errors we will always find functions of low frequency in the error, so that the above statement of convergence does not hold; instead, the smoothing property of the damped Jacobi method is valid: a few steps lead only to a small reduction of the error, but they smooth the error in the sense that its high-frequency parts are reduced considerably.
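The smoothing effect is easy to observe numerically. The following sketch is an illustration only (the 1D model matrix, the value ω = 1/2, and three smoothing steps are assumed choices, not taken from the text); it applies the damped Jacobi step x ↦ x − ωD⁻¹(Ax − b) to a purely low-frequency and a purely high-frequency error.

```python
import numpy as np

def damped_jacobi(A, b, x, omega=0.5, steps=3):
    """Apply a fixed number of damped Jacobi steps x <- x - omega*D^{-1}(Ax - b)."""
    d = np.diag(A)
    for _ in range(steps):
        x = x - omega * (A @ x - b) / d
    return x

# Assumed 1D model problem: tridiagonal matrix of -u'' on a uniform grid (Dirichlet BC).
m = 63
A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
b = np.zeros(m)                          # with b = 0 the iterate itself is the error

j = np.arange(1, m + 1)
low = np.sin(np.pi * 1 * j / (m + 1))    # smooth (low-frequency) eigenvector
high = np.sin(np.pi * m * j / (m + 1))   # oscillatory (high-frequency) eigenvector

for name, e0 in [("low frequency", low), ("high frequency", high)]:
    e3 = damped_jacobi(A, b, e0.copy(), omega=0.5, steps=3)
    print(name, np.linalg.norm(e3) / np.linalg.norm(e0))
# The high-frequency error is at least halved per step, as the estimate above predicts,
# while the low-frequency error is reduced only marginally.
```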

The very idea of the multigrid method lies in the approximative calculation of this remaining error on a coarse grid. The smooth error can still be represented on the coarser grid and should be approximated there. Generally, the dimension of the problem is greatly reduced in this way. Since the finite element discretizations are a central topic of this book, we develop the idea of multigrid methods for such an example. But it will turn out that the multigrid method can be used as well for both the finite difference and the finite volume methods. Multigrid methods have even been successfully used in areas other than the discretization of differential equations. Algebraic multigrid methods are generally applicable to systems of linear equations (5.1) and generate by themselves an abstract analogy of a “grid hierarchy” (see, for example, [65]).


5.5.2 Multigrid Method for Finite Element Discretizations
Let T_l = T_h be a triangulation that originates from a coarse triangulation T_0 by l applications of a refinement strategy, for example the strategy of Section 2.4.1. As we will see, it is not necessary that, for example, in two space dimensions going from T_k to T_{k+1} each triangle will be partitioned into four triangles. Only the relation

    V_k ⊂ V_{k+1} ,   k = 0, . . . , l − 1 ,

has to hold for finite-dimensional approximation spaces V_0, V_1, . . . , V_l = V_h generated by a fixed ansatz; i.e., the approximation spaces have to be nested. This holds for all approximation spaces discussed in Section 3.3 if T_{k+1} is still a conforming triangulation and results from T_k by partitioning of K ∈ T_k into a possibly varying number of elements of equal kind.
The nodes of T_k, which are the degrees of freedom of the discretization (possibly multiple in a Hermite ansatz), are denoted by

    a_i^k ,   i = 1, . . . , M_k ,

and the corresponding basis functions of V_k are denoted by

    φ_i^k ,   i = 1, . . . , M_k ,

with the index k = 0, . . . , l. For a quadratic ansatz on a triangle and Dirichlet boundary conditions the a_i^k are just the vertices and midpoints of the edges in the interior of the domain. Let the underlying variational equation (2.21) be defined by the bilinear form a and the linear form b on the function space V. The system of equations to be solved is

    A_l x_l = b_l .                                           (5.88)

In addition, we have to consider auxiliary problems

    A_k x_k = b_k

for k = 0, . . . , l − 1. For the discretization matrix on each refinement level we have, according to (2.34),

    (A_k)_{ij} = a( φ_j^k, φ_i^k ) ,   i, j = 1, . . . , M_k ,   k = 0, . . . , l ,

and for the right-hand side of the problem to be solved

    (b_l)_i = b( φ_i^l ) ,   i = 1, . . . , M_l .

In Section 2.2, x_l is denoted by ξ, and b_l is denoted by q_h.
First we discuss the finite element discretization of a variational equation with symmetric bilinear form, so that in reference to Lemma 2.14 the Galerkin method to be solved is equivalent to the Ritz method, i.e., to the minimization of

    F_l(x_l) := (1/2) x_l^T A_l x_l − b_l^T x_l .

Note that l indicates the discretization level and is not an index of a component or an iteration step.
We distinguish between the function u_l ∈ V_l and the representation vector x_l ∈ R^{M_l}, so that

    u_l = Σ_{i=1}^{M_l} x_{l,i} φ_i^l .                       (5.89)

For a Lagrange ansatz we have

    x_{l,i} = u_l( a_i^l ) ,   i = 1, . . . , M_l ,

as illustrated by Figure 5.5.


Figure 5.5. ui and xi (a piecewise linear function and its nodal values).

Relation (5.89) defines a linear bijective mapping

    P_l : R^{M_l} → V_l .                                     (5.90)

Thus for z_l ∈ R^{M_l} (compare (2.35)),

    F_l(z_l) = (1/2) z_l^T A_l z_l − b_l^T z_l = (1/2) a(P_l z_l, P_l z_l) − b(P_l z_l) = F(P_l z_l) ,

where

    F(u) := (1/2) a(u, u) − b(u)   for u ∈ V
Let x_l^{(k)} be the kth iterate to the solution of (5.88).

(1) Smoothing step: For fixed ν ∈ {1, 2, . . .} calculate

        x_l^{(k+1/2)} = S_l^ν x_l^{(k)} .

    Let the corresponding function be

        u_l^{(k+1/2)} = P_l x_l^{(k+1/2)} ∈ V_l .

(2) Coarse grid correction: Solve (exactly)

        F( u_l^{(k+1/2)} + v ) → min                          (5.93)

    varying v ∈ V_{l−1}, with solution v̄_{l−1}. Then set

        x_l^{(k+1)} = x_l^{(k+1/2)} + P_l^{−1} v̄_{l−1} = P_l^{−1}( u_l^{(k+1/2)} + v̄_{l−1} ) .

Table 5.5. (k + 1)th step of the two-grid iteration.

is the energy functional for the variational equation.
If x̄_l is an approximation of x_l, then the error y_l := x_l − x̄_l satisfies the error equation

    A_l y_l = b_l − A_l x̄_l .                                (5.91)

This equation is equivalent to the minimization problem

    F_l( x̄_l + y_l ) = min_{y ∈ R^{M_l}} F_l( x̄_l + y )

and therefore to

    F( P_l x̄_l + v_l ) = min_{v ∈ V_l} F( P_l x̄_l + v ) ,   (5.92)

with v_l = P_l y_l.
If the error y_l is “smooth” in the sense that it can be well approximated also in the lower-dimensional space V_{l−1}, one can solve the error equation (5.91) approximately as part of an iteration step by solving the minimization problem (5.92) only on V_{l−1}. The starting condition of a “smooth” error will be ensured by the application of a fixed number of steps of a smoothing iteration method. Let S_l denote the application of such a smoothing operation, for example the damped Jacobi method

    S_l x = x − ω D_l^{−1} ( A_l x − b_l )

with the diagonal matrix D_l corresponding to A_l according to (5.18).
Thus we get the algorithm of the two-grid iteration, whose (k + 1)th step is described in Table 5.5. Problem (5.93) from Table 5.5 is equivalent to (compare with Lemma 2.3)

    a( u_l^{(k+1/2)} + v, w ) = b(w)   for all w ∈ V_{l−1}    (5.94)

(1) A priori smoothing: Perform ν1 smoothing steps:

        x_l^{(k+1/3)} = S_l^{ν1} x_l^{(k)} ,

    where ν1 ∈ {1, 2, . . .} is fixed. Let the corresponding function be

        u_l^{(k+1/3)} := P_l x_l^{(k+1/3)} .

(2) Coarse grid correction: Solve on V_{l−1} the Galerkin discretization

        a( v̄_{l−1}, w ) = b̃(w)   for all w ∈ V_{l−1}         (5.95)

    with the bilinear form a and the linear form

        b̃(w) := b(w) − a( u_l^{(k+1/3)}, w )

    (a) for l = 1 exactly,
    (b) for l > 1 by μ steps of a multigrid iteration on level l − 1 for a and b̃ and for the start approximation 0.

    Set x_l^{(k+2/3)} = x_l^{(k+1/3)} + P_l^{−1} v̄_{l−1} .

(3) A posteriori smoothing: Perform ν2 smoothing steps

        x_l^{(k+1)} = S_l^{ν2} x_l^{(k+2/3)} ,

    with ν2 ∈ {1, 2, . . .} fixed.

Table 5.6. (k + 1)th step of the multigrid iteration on level l for bilinear form a and linear form b.

and thus again to the Galerkin discretization of a variational equation with V_{l−1} instead of V, with the same bilinear form and with a linear form defined by

    w ↦ b(w) − a( u_l^{(k+1/2)}, w )   for w ∈ V_{l−1} .

Hence we can ignore the assumption of symmetry for the bilinear form a and find the approximative solution of the error equation (5.91) on grid level l − 1 by solving the variational equation (5.94). The equivalent system of equations will be derived in the following. On the one hand, this problem has a lower dimension than the original problem, but it also must be solved for each iteration. This suggests the following recursive procedure: If we have more than two grid levels, we again approximate this variational equation by μ multigrid iterations; in the same way we treat the hereby created Galerkin discretization on level l − 2 until level 0 is reached, where we solve exactly. Furthermore, to conclude each iteration step, smoothing steps should be performed. This leads to the algorithm of the multigrid iteration. The (k + 1)th step of the multigrid iteration on level l for the bilinear form a, linear form b, and starting iteration x_l^{(k)} is described in Table 5.6.
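The recursion just described fits in a few lines of code. The sketch below is only an illustration of the control flow of Table 5.6 under simplifying assumptions (dense NumPy matrices, damped Jacobi smoothing, the restriction taken as the plain transpose of the prolongation rather than the scaled adjoint introduced later in (5.105), and coarse matrices A[k] assumed to be given, for example as Galerkin products); it is not the book's implementation.

```python
import numpy as np

def smooth(A, b, x, omega, steps):
    # damped Jacobi: x <- x - omega * D^{-1} (A x - b)
    d = np.diag(A)
    for _ in range(steps):
        x = x - omega * (A @ x - b) / d
    return x

def multigrid(l, A, P, b, x, nu1=2, nu2=2, mu=1, omega=0.5):
    """One multigrid iteration on level l.
    A[k]: system matrix on level k; P[k]: prolongation from level k-1 to level k."""
    if l == 0:
        return np.linalg.solve(A[0], b)      # solve exactly on the coarsest level
    x = smooth(A[l], b, x, omega, nu1)       # a priori smoothing
    r = P[l].T @ (b - A[l] @ x)              # restricted residual (restriction ~ p^T here)
    y = np.zeros(A[l - 1].shape[0])
    for _ in range(mu):                      # mu = 1: V-cycle, mu = 2: W-cycle
        y = multigrid(l - 1, A, P, r, y, nu1, nu2, mu, omega)
    x = x + P[l] @ y                         # coarse grid correction
    return smooth(A[l], b, x, omega, nu2)    # a posteriori smoothing
```

With mu = 1 the recursive calls trace out the V-cycle shape of Figure 5.6, with mu = 2 the W-cycle.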

In general, ν1 = ν2 is used. In a convergence analysis it turns out that only the sum of the smoothing steps is important. Despite the recursive definition of a multigrid iteration we have here a finite method, because level 0 is reached after at most l recursions, where the auxiliary problem will be solved exactly. For μ usually only the values μ = 1 or μ = 2 are used. The terms V-cycle for μ = 1 and W-cycle for μ = 2 are commonly used, because for an iteration, the sequence of levels on which operations are executed has the shape of these letters (see Figure 5.6).

Figure 5.6. Grid levels for the V-cycle (μ = 1) and the W-cycle (μ = 2); panels for l = 2 and l = 3.

The problems in (5.94) and (5.95) (see Table 5.6) have the form

    a(u + v, w) = b(w)   for all w ∈ V_{l−1} ,                (5.96)

where v ∈ V_{l−1} is unknown and u ∈ V_l is known. An equivalent system of equations arises by inserting the basis functions φ_j^{l−1}, j = 1, . . . , M_{l−1}, for w and an appropriate representation for v. If we again take the representation with respect to the φ_j^{l−1}, we get as in (2.34)

    A_{l−1} P_{l−1}^{−1} v = d_{l−1} .                        (5.97)

Here the residual d_k ∈ R^{M_k} of u on the different levels k = 0, . . . , l is defined by

    d_{k,i} := b( φ_i^k ) − a( u, φ_i^k ) ,   i = 1, . . . , M_k .

We now develop an alternative representation for (5.97) and the coarse grid correction for possible generalizations beyond the Galerkin approximations. Therefore, let R ∈ R^{M_{l−1}, M_l} be the matrix that arises through the unique representation of the basis functions φ_j^{l−1} with respect to the basis φ_i^l, which means the elements r_{ji} of R are determined by the equations

    φ_j^{l−1} = Σ_{i=1}^{M_l} r_{ji} φ_i^l ,   j = 1, . . . , M_{l−1} .

Then (5.96) is equivalent to

    a(v, w) = b(w) − a(u, w)   for all w ∈ V_{l−1}

    ⇔   a( Σ_{s=1}^{M_{l−1}} (P_{l−1}^{−1} v)_s φ_s^{l−1} , φ_j^{l−1} ) = b( φ_j^{l−1} ) − a( u, φ_j^{l−1} ) ,   j = 1, . . . , M_{l−1}

    ⇔   Σ_{s=1}^{M_{l−1}} (P_{l−1}^{−1} v)_s a( Σ_{t=1}^{M_l} r_{st} φ_t^l , Σ_{i=1}^{M_l} r_{ji} φ_i^l ) = Σ_{i=1}^{M_l} r_{ji} [ b( φ_i^l ) − a( u, φ_i^l ) ]

    ⇔   Σ_{s=1}^{M_{l−1}} Σ_{i,t=1}^{M_l} r_{ji} a( φ_t^l, φ_i^l ) r_{st} (P_{l−1}^{−1} v)_s = (R d_l)_j ,   j = 1, . . . , M_{l−1} .

Hence the system of equations has the form

    R A_l R^T P_{l−1}^{−1} v = R d_l .                        (5.98)
The matrix R is easy to calculate for a node-based basis φ_i^l satisfying φ_i^l( a_j^l ) = δ_{ij}, since in this case we have for v ∈ V_l,

    v = Σ_{i=1}^{M_l} v( a_i^l ) φ_i^l ,

and therefore in particular,

    φ_j^{l−1} = Σ_{i=1}^{M_l} φ_j^{l−1}( a_i^l ) φ_i^l ,

and thus

    r_{ji} = φ_j^{l−1}( a_i^l ) .

For the linear ansatz in one space dimension with Dirichlet boundary conditions (i.e., with V = H¹₀(a, b) as basic space) this means that

    R = ( 1/2   1   1/2                           )
        (           1/2   1   1/2                 )
        (                       . . .             )           (5.99)
        (                           1/2   1   1/2 ) .
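For concreteness, the following small sketch (an illustration only; the interval (0, 1), the node numbering, and the mesh widths are assumptions) evaluates r_ji = φ_j^{l−1}(a_i^l) for the 1D linear ansatz and reproduces the rows 1/2, 1, 1/2 of (5.99).

```python
import numpy as np

def hat(center, h):
    """Piecewise linear hat function with support (center - h, center + h)."""
    return lambda x: max(0.0, 1.0 - abs(x - center) / h)

M_coarse = 3                              # interior coarse nodes on (0, 1)
H = 1.0 / (M_coarse + 1)                  # coarse mesh width
h = H / 2                                 # fine mesh width after one uniform refinement
M_fine = 2 * M_coarse + 1

coarse_nodes = [(j + 1) * H for j in range(M_coarse)]
fine_nodes = [(i + 1) * h for i in range(M_fine)]

# r_ji = phi_j^{l-1}(a_i^l): value of the coarse basis function at the fine node
R = np.array([[hat(cj, H)(ai) for ai in fine_nodes] for cj in coarse_nodes])
print(R)            # each row shows the pattern 0.5, 1, 0.5 of (5.99)
print(R.T.shape)    # p = R^T maps coarse coefficient vectors to fine ones, cf. (5.101)
```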

The representation (5.98) can also be interpreted in this way:
Due to V_{l−1} ⊂ V_l the identity defines a natural prolongation from V_{l−1} to V_l, which means that

    p̃ : V_{l−1} → V_l ,   v ↦ v ,

as illustrated by Figure 5.7.

Figure 5.7. Prolongation.

This prolongation corresponds to a prolongation p from R^{M_{l−1}} to R^{M_l}, the canonical prolongation, through the transition to the representation vectors (5.90). It is given by

    p := P_l^{−1} P_{l−1} ,                                   (5.100)

since for x_{l−1} ∈ R^{M_{l−1}}, p can be composed as follows:

    x_{l−1} ↦ P_{l−1} x_{l−1} ↦ p̃ P_{l−1} x_{l−1} = P_{l−1} x_{l−1} ↦ P_l^{−1} P_{l−1} x_{l−1} .

Obviously, p is linear and can be identified with its matrix representation in R^{M_l, M_{l−1}}. Then

    p = R^T                                                   (5.101)

holds, because

    P_{l−1} y = Σ_{j=1}^{M_{l−1}} y_j φ_j^{l−1} = Σ_{i=1}^{M_l} ( Σ_{j=1}^{M_{l−1}} y_j r_{ji} ) φ_i^l ,

i.e., R^T y = P_l^{−1} ( P_{l−1} y ) for any y ∈ R^{M_{l−1}}.
In the following, R^{M_l} will be endowed with a scalar product ⟨·, ·⟩^{(l)}, which is a Euclidean scalar product scaled by a factor S_l,

    ⟨x_l, y_l⟩^{(l)} := S_l Σ_{i=1}^{M_l} x_{l,i} y_{l,i} .   (5.102)

The scaling factor is to be chosen such that for the induced norm ‖·‖_l and the L²(Ω)-norm on V_l,

    C_1 ‖P_l x_l‖_0 ≤ ‖x_l‖_l ≤ C_2 ‖P_l x_l‖_0               (5.103)

for x_l ∈ R^{M_l}, l = 0, 1, . . ., with constants C_1, C_2 independent of l: If the triangulations are members of a regular and quasi-uniform family (T_h) (see Definition 3.28), then in d space dimensions one can choose S_l = h_l^d, with h_l being the maximal diameter of K ∈ T_l (see Theorem 3.43).
Let r : R^{M_l} → R^{M_{l−1}} be defined by

    r = p* ,                                                  (5.104)

with the adjoint p* defined with respect to the scalar products ⟨·, ·⟩^{(l−1)} and ⟨·, ·⟩^{(l)}; that is,

    ⟨r x_l, y_{l−1}⟩^{(l−1)} = ⟨p* x_l, y_{l−1}⟩^{(l−1)} = ⟨x_l, p y_{l−1}⟩^{(l)} .

If p is the canonical prolongation, then r is called the canonical restriction. For the representation matrices,

    r = (S_l / S_{l−1}) p^T = (S_l / S_{l−1}) R .             (5.105)

In example (5.102) for d = 2 with h_l = h_{l−1}/2 we have S_l / S_{l−1} = 1/4. Due to P_l p = P_{l−1}, the canonical restriction of R^{M_l} on R^{M_{l−1}} satisfies

    r R_l = R_{l−1} ,

where R_l : V_l → R^{M_l} is defined as the adjoint of P_l,

    ⟨P_l x_l, v_l⟩_0 = ⟨x_l, R_l v_l⟩^{(l)}   for all x_l ∈ R^{M_l}, v_l ∈ V_l ,

because for any y_{l−1} ∈ R^{M_{l−1}} and for v_{l−1} ∈ V_{l−1} ⊂ V_l,

    ⟨r R_l v_{l−1}, y_{l−1}⟩^{(l−1)} = ⟨R_l v_{l−1}, p y_{l−1}⟩^{(l)} = ⟨v_{l−1}, P_l p y_{l−1}⟩_0
                                     = ⟨v_{l−1}, P_{l−1} y_{l−1}⟩_0 = ⟨R_{l−1} v_{l−1}, y_{l−1}⟩^{(l−1)} .

Using (5.105) we see the equivalence of equation (5.98) to

    ( r A_l p ) y_{l−1} = r d_l .                             (5.106)

Setting ṽ := P_{l−1} ỹ_{l−1} for a perhaps only approximative solution ỹ_{l−1} of (5.106), the coarse grid correction will be finished by the addition of P_l^{−1} ṽ. Due to

    P_l^{−1} ṽ = P_l^{−1} P_{l−1} P_{l−1}^{−1} ṽ = p P_{l−1}^{−1} ṽ ,

the coarse grid correction is

    x_l^{(k+2/3)} = x_l^{(k+1/3)} + p ỹ_{l−1} .
The above-mentioned facts suggest the following structure of a general multigrid method: For discretizations defining a hierarchy of discrete problems,

    A_l x_l = b_l ,

one needs prolongations

    p : R^{M_{k−1}} → R^{M_k}

and restrictions

    r : R^{M_k} → R^{M_{k−1}}

for k = 1, . . . , l and the matrices Ã_{k−1} for the error equations. The coarse grid correction steps (5.93) and (5.95) hence take the following form:
Solve (with μ steps of the multigrid method)

    Ã_{l−1} y_{l−1} = r ( b_l − A_l x_l^{(k+1/3)} )

and set

    x_l^{(k+2/3)} = x_l^{(k+1/3)} + p y_{l−1} .

The above choice

    Ã_{l−1} = r A_l p

is called the Galerkin product. For Galerkin approximations this coincides with the discretization matrix of the same type on the grid of level l − 1, due to (5.97). Using the discretization matrix of the same type on level l − 1 is also a common choice for other discretizations and is then an alternative to the Galerkin product. In view of the choice of p and r we should observe the validity of (5.104). An interpolational definition of the prolongation on the basis of (finite element) basis functions, as for example (5.101) (see also example (5.99)), is also common in other discretizations. In more difficult problems, as for example those with (dominant) convection in addition to diffusive transport processes, nonsymmetric problems arise with a small constant of V-ellipticity. Here the use of matrix-dependent, that means A_l-dependent, prolongations and restrictions is recommended.
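The Galerkin product itself is a one-line computation once p is available. The sketch below (illustrative; the 1D stiffness matrices and mesh widths are assumed, and the restriction is taken here as the plain transpose p^T) checks that r A_l p reproduces the coarse-level matrix of the same type for nested linear elements.

```python
import numpy as np

def stiffness_1d(m, h):
    """Stiffness matrix of the 1D linear finite element ansatz (Dirichlet BC)."""
    return (1.0 / h) * (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))

M_c, h_c = 3, 0.25                       # coarse level: 3 interior nodes
M_f, h_f = 2 * M_c + 1, h_c / 2          # fine level after one uniform refinement

R = np.zeros((M_c, M_f))                 # R as in (5.99): rows 1/2, 1, 1/2
for j in range(M_c):
    R[j, 2 * j:2 * j + 3] = [0.5, 1.0, 0.5]
p = R.T                                  # prolongation, cf. (5.101)

A_f = stiffness_1d(M_f, h_f)
print(np.allclose(p.T @ A_f @ p, stiffness_1d(M_c, h_c)))   # True
# With the scaled restriction r = (S_l/S_{l-1}) p^T of (5.105), the coarse matrix and the
# restricted residual pick up the same constant factor, so the correction is unchanged.
```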


5.5.3 Effort and Convergence Behaviour
In order to judge the efficiency of a multigrid method, the number of operations per iteration and the number of iterations (required to reach an error level ε, see (5.4)) have to be estimated. Due to the recursive structure, the first number is not immediately clear. The aim is to have only the optimal amount of O(M_l) operations for sparse matrices. For this the dimensions of the auxiliary problems have to decrease sufficiently. This is expressed by the following:
There exists a constant C > 1 such that

    M_{l−1} ≤ M_l / C   for l ∈ N .                           (5.107)

Hence we assume an infinite hierarchy of problems and/or grids, which also corresponds to the asymptotic point of view of a discretization from Section 3.4. Relation (5.107) is thus a condition for a refinement strategy. For the model problem of the Friedrichs–Keller triangulation of a rectangle (see Figure 2.9) in the case of a regular “red” refinement we have h_l = h_{l−1}/2. Thus C = 4, and for analogous constructions in d space dimensions

C = 2^d. The matrices that appear should be sparse, so that for level l the following holds:

    smoothing step                     = C_S M_l operations,
    error calculation and restriction  = C_D M_l operations,
    prolongation and correction        = C_C M_l operations.
Then we can prove the following (see [16, p. 326]):
If the number μ of multigrid steps in the recursion satisfies

    μ < C ,                                                   (5.108)

then the number of operations for an iteration step for a problem on level l can be estimated by

    C(ν) M_l .                                                (5.109)

Here ν is the number of a priori and a posteriori smoothing steps and

    C(ν) = (ν C_S + C_D + C_C) / (1 − μ/C) + O( (μ/C)^l ) .

The requirement (5.108) will be satisfied in general through the restriction to μ = 1 or μ = 2. Analogously, the memory requirement is O(M_l), since

    Σ_{k=0}^{l} M_k ≤ ( C/(C − 1) ) M_l .
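As a worked illustration of these bounds (using only the model assumptions already stated): for the two-dimensional red refinement with C = 4 and a V-cycle (μ = 1), the factor 1/(1 − μ/C) equals 4/3, so C(ν) ≈ (ν C_S + C_D + C_C) · 4/3; one multigrid step thus costs only about a third more than the corresponding smoothing, residual, and correction work on the finest level alone, and the memory bound reads Σ_{k=0}^{l} M_k ≤ (4/3) M_l.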

Whether this larger effort (of equal complexity) in comparison to the other methods discussed is justified will be decided by the rate of convergence. The multigrid method is a linear stationary method. The iteration matrix M_l^{TGM} of the two-grid method results from

    x_l^{(k+1/2)} = S_l^ν x_l^{(k)} ,
    x_l^{(k+1)} = x_l^{(k+1/2)} + p A_{l−1}^{−1} r ( b_l − A_l x_l^{(k+1/2)} )

to

    M_l^{TGM} = ( I − p A_{l−1}^{−1} r A_l ) S_l^ν .          (5.110)

Also, the consistency of the method follows immediately if the smoothing iteration is consistent.
The analysis of the convergence of the multigrid method can be reduced to the analysis of the two-grid method, since the iteration matrix is a modification of M_l^{TGM} (see [16, p. 328]). For a large class of a priori and a posteriori smoothing operators as well as of restrictions and prolongations it can be shown (see [16, p. 347]) that there exists a constant ϱ ∈ (0, 1), independent of the discretization parameter h_l, bounding the spectral radius of M_l^{TGM}. Combined with (5.109) this shows that multigrid methods are optimal in their complexity. This also shows their potential superiority compared with all other methods described.

In the following we will only indicate the schematic procedure to prove this assertion. It is sufficient to prove the following two properties, where the spectral norm is used as the matrix norm, that is, the matrix norm that is induced by the Euclidean vector norm.
(1) Smoothing property:
    There exists C_S > 0 such that   ‖A_l S_l^ν‖ ≤ (C_S / ν) ‖A_l‖ .
(2) Approximation property:
    There exists C_A > 0 such that   ‖A_l^{−1} − p A_{l−1}^{−1} r‖ ≤ C_A ‖A_l‖^{−1} .        (5.111)

Due to

    M_l^{TGM} = ( A_l^{−1} − p A_{l−1}^{−1} r ) A_l S_l^ν ,

we can conclude that

    ‖M_l^{TGM}‖ ≤ ‖A_l^{−1} − p A_{l−1}^{−1} r‖ ‖A_l S_l^ν‖ ≤ (C_S C_A) / ν ,

which means that for sufficiently large ν,

    ‖M_l^{TGM}‖ ≤ ϱ < 1

with ϱ independent of l.
The smoothing property is of an algebraic nature, but for the proof of the approximation property we will use, at least indirectly, the original variational formulation of the boundary value problem and the corresponding error estimate. Therefore, we discuss only the smoothing property for, as an example, the relaxed Richardson method for a symmetric positive definite matrix A_l, i.e.,

    S_l = I_l − ω A_l   with   ω ∈ ( 0, 1/λ_max(A_l) ] .

Let {z_i}_{i=1}^{M_l} be an orthonormal basis of eigenvectors of A_l. For any initial vector x^{(0)} represented in this basis as x^{(0)} = Σ_{i=1}^{M_l} c_i z_i it follows that (compare (5.68))

    ‖A_l S_l^ν x^{(0)}‖² = Σ_{i=1}^{M_l} λ_i² (1 − λ_i ω)^{2ν} c_i² = ω^{−2} Σ_{i=1}^{M_l} (λ_i ω)² (1 − λ_i ω)^{2ν} c_i²
                         ≤ ω^{−2} ( max_{ξ∈[0,1]} ξ (1 − ξ)^ν )² Σ_{i=1}^{M_l} c_i² .

The function ξ ↦ ξ(1 − ξ)^ν has its maximum at ξ_max = (ν + 1)^{−1}; thus

    ξ_max (1 − ξ_max)^ν = (1/(ν + 1)) (1 − 1/(ν + 1))^ν = (1/ν) ( ν/(ν + 1) )^{ν+1} ≤ 1/(eν) .

Hence

    ‖A_l S_l^ν x^{(0)}‖ ≤ ( 1/(ω e ν) ) ‖x^{(0)}‖ ,

which implies

    ‖A_l S_l^ν‖ ≤ 1/(ω e ν) .

Since the inclusion ω ∈ (0, 1/λ_max(A_l)] can be written in the form ω = σ/‖A_l‖ with σ ∈ (0, 1], we have C_S = 1/(σe).
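The elementary bound used in the last step can be checked numerically; the following throwaway sketch (purely illustrative) compares max_{ξ∈[0,1]} ξ(1 − ξ)^ν with the value at ξ_max and with 1/(eν) for a few ν.

```python
import numpy as np

for nu in (1, 2, 4, 8, 16):
    xi = np.linspace(0.0, 1.0, 100001)
    max_val = np.max(xi * (1.0 - xi) ** nu)        # numerical maximum over [0, 1]
    exact = nu ** nu / (nu + 1) ** (nu + 1)         # value at xi_max = 1/(nu + 1)
    print(nu, max_val, exact, 1.0 / (np.e * nu))    # exact <= 1/(e*nu) in every row
```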
The approximation property can be motivated in the following way. The fine grid solution x_l of A_l x_l = d_l is replaced in the coarse grid correction by p x_{l−1} from A_{l−1} x_{l−1} = d_{l−1} := r d_l. Therefore, p x_{l−1} ≈ A_l^{−1} d_l should hold. The formulation (5.111) thus is just a quantitative version of this requirement. Since in the symmetric case ‖A_l‖^{−1} is simply the reciprocal value of the largest eigenvalue, (3.140) in Theorem 3.45 establishes the relation to the statements of convergence in Section 3.4. For a more exact analysis of convergence and a more extensive description of this topic we refer to the cited literature (see also [17]).


Exercises
5.12 Determine the prolongation and restriction according to (5.101) and (5.104) for the case of a linear ansatz on a Friedrichs–Keller triangulation.

5.13 Prove the consistency of the two-grid method (5.110) in the case of a consistent smoothing iteration.


5.6 Nested Iterations
As in Section 5.5 we assume that besides the system of equations

    A_l x_l = b_l

with M_l unknowns, there are given analogous low-dimensional systems of equations

    A_k x_k = b_k ,   k = 0, . . . , l − 1 ,                  (5.112)

with M_k unknowns, where M_0 < M_1 < · · · < M_l. Let all systems of equations be approximations of the same continuous problem such that an error estimate of the type

    ‖u − P_l x_l‖ ≤ C_A h_l^α

holds, with P_l according to (5.90) and α > 0. Here ‖·‖ is a norm on the basic space V, and the constant C_A generally depends on the solution u

of the continuous problem. The discretization parameter h_l determines the dimension M_l: In the simplest case of a uniform refinement, h_l^d ∼ 1/M_l holds in d space dimensions. One may also expect that for the discrete solutions,

    ‖p x_{k−1} − x_k‖_k ≤ C_1 C_A h_k^α ,   k = 1, . . . , l ,

holds with a constant C_1 > 0. Here ‖·‖_k is a norm on R^{M_k}, and the mapping p = p_{k−1,k} : R^{M_{k−1}} → R^{M_k} is a prolongation, for example the canonical prolongation introduced in Section 5.5. In this case the estimate can be rigorously proven with the definition of the canonical prolongation p = P_k^{−1} P_{k−1}:

    ‖p x_{k−1} − x_k‖_k = ‖P_k^{−1} ( P_{k−1} x_{k−1} − P_k x_k )‖_k
                        ≤ ‖P_k^{−1}‖_{L[V_k, R^{M_k}]} ‖P_{k−1} x_{k−1} − P_k x_k‖
                        ≤ ‖P_k^{−1}‖_{L[V_k, R^{M_k}]} ( C_A h_{k−1}^α + C_A h_k^α ) ≤ C_1 C_A h_k^α

with

    C_1 = max_{j=1,...,l} ‖P_j^{−1}‖_{L[V_j, R^{M_j}]} ( 1 + (h_{j−1}/h_j)^α ) .

Let the system of equations be solved with an iterative method given by the fixed-point mapping Φ_k, k = 0, . . . , l, which means that x_k according to (5.112) satisfies x_k = Φ_k(x_k, b_k). Then it is sufficient to determine an iterate x̃_l with an accuracy

    ‖x̃_l − x_l‖_l ≤ C̃_A h_l^α                               (5.113)

with C̃_A := C_A / ‖P_l‖_{L[R^{M_l}, V]}, because then we also have

    ‖P_l x̃_l − P_l x_l‖ ≤ C_A h_l^α .

If one does not have a good initial iterate from the concrete context, the algorithm of nested iterations explained in Table 5.7 can be used. It is indeed a finite process.
The question is how to choose the iteration numbers m_k such that (5.113) finally holds, and whether the arising overall effort is acceptable. An answer to this question is provided by the following theorem:

Theorem 5.19 Let the iterative method Φ_k have the contraction number ϱ_k with respect to ‖·‖_k. Assume that there exist constants C_2, C_3 > 0 such that

    ‖p‖_{L[R^{M_{k−1}}, R^{M_k}]} ≤ C_2 ,
    h_{k−1} ≤ C_3 h_k ,

for all k = 1, . . . , l. If the iteration numbers m_k for the nested iterations are chosen in such a way that

    ϱ_k^{m_k} ≤ 1 / ( C_2 C_3^α + C_1 ‖P_l‖ ) ,               (5.114)


Choose m_k, k = 1, . . . , l.
Let x̃_0 be an approximation of x_0, for example x̃_0 = x_0 = A_0^{−1} b_0.
For k = 1, . . . , l:
    x̃_k^{(0)} := p x̃_{k−1} .
    Perform m_k iterations:
        x̃_k^{(i)} := Φ_k( x̃_k^{(i−1)}, b_k ) ,   i = 1, . . . , m_k .
    Set x̃_k := x̃_k^{(m_k)} .

Table 5.7. Nested Iteration.


then

    ‖x̃_k − x_k‖_k ≤ C̃_A h_k^α ,

for all k = 1, . . . , l, provided that this estimate holds for k = 0.

Proof: The proof is given by induction on k. Assume that the assertion is true for k − 1. This induces

    ‖x̃_k − x_k‖_k ≤ ϱ_k^{m_k} ‖p x̃_{k−1} − x_k‖_k
                  ≤ ϱ_k^{m_k} ( ‖p ( x̃_{k−1} − x_{k−1} )‖_k + ‖p x_{k−1} − x_k‖_k )
                  ≤ ϱ_k^{m_k} ( C_2 C̃_A h_{k−1}^α + C_1 C_A h_k^α )
                  ≤ ϱ_k^{m_k} ( C_2 C_3^α + C_1 ‖P_l‖ ) C̃_A h_k^α ,

and with (5.114) the right-hand side is bounded by C̃_A h_k^α.                        □
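The algorithm of Table 5.7 is short enough to state as code. The sketch below is an illustration only: the names solve_coarsest, Phi, and prolongate are assumed interfaces (Φ_k could, for example, be one multigrid iteration of Section 5.5), and m_k is kept constant, as in the discussion that follows.

```python
def nested_iteration(levels, solve_coarsest, Phi, prolongate, b, m=1):
    """Nested iteration in the spirit of Table 5.7.
    levels:           l (finest level index)
    solve_coarsest:   returns x_0 = A_0^{-1} b[0]
    Phi[k](x, b):     one step of the iterative method on level k
    prolongate[k](x): prolongation p from level k-1 to level k
    b[k]:             right-hand side on level k
    m:                number of iterations per level (m_k = m)"""
    x = solve_coarsest(b[0])
    for k in range(1, levels + 1):
        x = prolongate[k](x)              # starting value from the coarser level
        for _ in range(m):
            x = Phi[k](x, b[k])           # m_k iterations of the method on level k
        # by Theorem 5.19, x now approximates x_k up to a multiple of h_k^alpha
    return x
```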
Theorem 5.19 allows the calculation of the necessary number of iterations for the inner iteration from the norms ‖p‖_{L[R^{M_{k−1}}, R^{M_k}]}, ‖P_k^{−1}‖_{L[V_k, R^{M_k}]} and the constants h_{k−1}/h_k for k = 1, . . . , l, as well as the order of convergence α of the discretization.
In order to estimate the necessary effort according to (5.114) more exactly, the dependence of ϱ_k on k must be known. In the following we consider only the situation, known from the multigrid method, of a method of optimal complexity

    ϱ_k ≤ ϱ < 1 .

Here, in contrast to other methods, the number of iterations can be chosen constant (m_k = m for all k = 1, . . . , l). If, furthermore, the estimate (5.107) holds with the constant C, then analogously to the considerations in Section 5.5 the total number of operations for the nested iteration can

be estimated by

    m ( C/(C − 1) ) C̄ M_l .

Here C̄ M_k is the number of operations for an iteration with the iteration method Φ_k.
In the model problem of the Friedrichs–Keller triangulation with uniform refinement we have C/(C − 1) = 4/3 and C_3 = 2. For ‖·‖ = ‖·‖_0 as basic norm, α = 2 is a typical case according to Theorem 3.37. The existence of the constant C_2 will hereby finally be ensured consistently by the condition (5.103), observing (5.100). Assuming also that the constants C_1, C_2, ‖P_l‖ are “small” and the iteration method has a “small” contraction number ϱ, only a small number of iterations m is necessary, in the ideal case m = 1. At least in this situation we can count on only a small increase of the necessary effort through the process of nested iterations, which provides an “appropriate” approximation x̃_k on all levels k of the discretization.
Finally, it is to be observed that the sequence of the discrete problems has to be defined only during the process of the nested iteration. This offers the possibility to combine it with a posteriori error estimators as discussed in Section 4.2, in order to develop a grid T_{k+1}, on which the discrete problem of level k + 1 is determined, on the basis of x̃_k as a refinement of T_k.
6
The Finite Volume Method




Finite volume methods are widely applied when differential equations in divergence form (cf. Section 0.5) or differential equations involving such differential expressions (for example, parabolic differential equations) are to be solved numerically. In the class of second-order linear elliptic differential equations, expressions of the form

    Lu := −∇ · ( K ∇u − c u ) + r u = f                       (6.1)

are typical (cf. (0.33)), where

    K : Ω → R^{d,d} ,   c : Ω → R^d ,   r, f : Ω → R .

The corresponding “parabolic version” is

    ∂u/∂t + Lu = f

and will be treated in Chapter 7.
First-order partial differential equations such as the classical conservation laws

    ∇ · q(u) = 0 ,

where q : R → R^d is a nonlinear vector field depending on u, or higher-order partial differential equations (such as the biharmonic equation (3.36)), or even systems of partial differential equations can be successfully discretized by the finite volume method.
In correspondence to the comparatively large class of problems that can be treated by the finite volume method, there are rather different sources

1960   Forsythe and Wasow        computation of neutron diffusion
1961   Marčuk                    computation of nuclear reactors
1971   McDonald                  fluid mechanics
1972   MacCormack and Paullay    fluid mechanics
1973   Rizzi and Inouye          fluid mechanics in 3D
1977   Samarski                  integro-interpolation method, balance method
  ...
1979   Jameson                   finite volume method
1984   Heinrich                  integro-balance method, generalized finite difference method
  ...
1987   Bank and Rose             box method
  ...

Table 6.1. Some sources of the finite volume method.

originating mainly from practical applications. Some of these sources are listed in Table 6.1. In contrast to finite difference or finite element methods, the theoretical understanding of the finite volume method remained at an early stage for a long time; only in recent years has essential progress been noted.
The finite volume method can be viewed as a discretization method in its own right. It includes ideas from both finite difference and finite element methods. So in the literature approaches can be found that interpret it as a “generalized finite difference method” or rather as a variant of the finite element method. In this chapter, we will consider only equations of the type (6.1).



6.1 The Basic Idea of the Finite Volume Method
Now we will describe the fundamental steps in the derivation of the finite volume method. For simplicity, we restrict ourselves to the case d = 2 and r = 0. Furthermore, we set q(u) := −K ∇u + c u. Then equation (6.1) becomes

    ∇ · q(u) = f .                                            (6.2)

In order to obtain a finite volume discretization, the domain Ω will be subdivided into M subdomains Ω_i such that the collection of all those subdomains forms a partition of Ω, that is:
(1) each Ω_i is an open, simply connected, and polygonally bounded set without slits,

(2) Ω_i ∩ Ω_j = ∅   (i ≠ j),
(3) ∪_{i=1}^{M} Ω̄_i = Ω̄ .

These subdomains Ω_i are called control volumes or control domains. Without going into more detail we mention that there also exist finite volume methods with a well-defined overlapping of the control volumes (that is, condition (2) is violated).
The next step, which is in common with all finite volume methods, consists in integrating equation (6.2) over each control volume Ω_i. After that, Gauss's divergence theorem is applied:

    ∫_{∂Ω_i} ν · q(u) dσ = ∫_{Ω_i} f dx ,   i ∈ {1, . . . , M} ,

where ν denotes the outer unit normal to ∂Ω_i. By the first condition of the partition, the boundary ∂Ω_i is formed by straight-line segments Γ_ij (j = 1, . . . , n_i), along which the normal ν|_{Γ_ij} =: ν_ij is constant (see Figure 6.1). So the line integral can be decomposed into a sum of line integrals from which the following equation results:

    Σ_{j=1}^{n_i} ∫_{Γ_ij} ν_ij · q(u) dσ = ∫_{Ω_i} f dx .    (6.3)




Figure 6.1. A control volume (Ω_i with outer unit normals ν_i1, . . . , ν_i5).

Now the integrals occurring in (6.3) have to be approximated. This can be done in very different ways, and so different final discretizations are obtained.
In general, finite volume methods can be distinguished by the following criteria:
(1) the geometric shape of the control volumes Ω_i,
(2) the position of the unknowns (“problem variables”) with respect to the control volumes,

(3) the approximation of the boundary (line (d = 2) or surface (d = 3)) integrals.
Especially the second criterion divides the finite volume methods into two large classes: the cell-centred and the cell-vertex finite volume methods. In the cell-centred methods, the unknowns are associated with the control volumes (for example, any control volume corresponds to a function value at some interior point (e.g., at the barycentre)). In the cell-vertex methods, the unknowns are located at the vertices of the control volumes. Sometimes, instead of the first-mentioned class, a subdivision into two classes, the so-called cell-centred and node-centred methods, is considered. The difference is whether the problem variables are assigned to the control volumes or, given the problem variables, associated control volumes are defined.
Example 6.1 Consider the homogeneous Dirichlet problem for the Poisson equation on the unit square:

    −Δu = f   in Ω = (0, 1)² ,
       u = 0   on ∂Ω .

Problem variables: function values at the nodes a_i of a square grid with mesh width h > 0.
Control volumes: Ω_i := { x ∈ Ω : |x − a_i|_∞ < h/2 } .

Figure 6.2. Problem variables and control volumes in a cell-centred finite volume method (the node a_i with its neighbours a_j1, . . . , a_j4).

For an inner control volume Ω_i (i.e., a_i ∈ Ω), equation (6.3) takes the form

    −Σ_{k=1}^{4} ∫_{Γ_ijk} ν_ijk · ∇u dσ = ∫_{Ω_i} f dx ,

where Γ_ijk := ∂Ω_i ∩ ∂Ω_jk. A closer look at the directional derivatives shows that

    ν_ij1 · ∇u = ∂_1 u ,    ν_ij2 · ∇u = ∂_2 u ,
    ν_ij3 · ∇u = −∂_1 u ,   ν_ij4 · ∇u = −∂_2 u ;

i.e., they are just the partial derivatives with respect to the first or the second variable on the corresponding parts of the boundary.

Approximating the integrals on Γ_ijk by means of the midpoint rule and replacing the derivatives by difference quotients, we have

    −Σ_{k=1}^{4} ∫_{Γ_ijk} ν_ijk · ∇u dσ ≈ −Σ_{k=1}^{4} ν_ijk · ∇u( (a_i + a_jk)/2 ) h
        ≈ −[ (u(a_j1) − u(a_i))/h + (u(a_j2) − u(a_i))/h − (u(a_i) − u(a_j3))/h − (u(a_i) − u(a_j4))/h ] h
        = 4 u(a_i) − Σ_{k=1}^{4} u(a_jk) .

Thus, we obtain exactly the expression that results from the application of a finite element method with continuous, piecewise linear ansatz and test functions on a Friedrichs–Keller triangulation (cf. Figure 2.9).
Furthermore, if we approximate the integral ∫_{Ω_i} f dx by f(a_i)h², we see that this term coincides with the trapezoidal rule applied to the right-hand side of the mentioned finite element formulation (cf. Lemma 2.13).
Actually, it is no accident that both discretization methods lead to the same algebraic system. Later on we will prove a more general result to confirm the above observation.
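The five-point scheme of Example 6.1 can be assembled and solved directly. The following sketch is an illustration only (the right-hand side, the lexicographic numbering, and the dense solver are assumed choices, not taken from the book); it sets up 4u_i − Σ_k u_{j_k} = h² f(a_i) at every interior node of the unit square.

```python
import numpy as np

def solve_poisson_fv(n, f):
    """Cell-centred finite volume scheme of Example 6.1 on the unit square.
    n: number of interior nodes per direction, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    N = n * n
    A = np.zeros((N, N))
    b = np.zeros(N)
    idx = lambda i, j: i * n + j                      # lexicographic numbering
    for i in range(n):
        for j in range(n):
            row = idx(i, j)
            A[row, row] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[row, idx(ii, jj)] = -1.0        # interior neighbour
                # neighbours on the boundary contribute u = 0 (Dirichlet condition)
            b[row] = h * h * f((i + 1) * h, (j + 1) * h)
    return np.linalg.solve(A, b).reshape(n, n)

u = solve_poisson_fv(31, lambda x, y: 2.0 * np.pi**2 * np.sin(np.pi*x) * np.sin(np.pi*y))
print(abs(u[15, 15] - np.sin(np.pi * 0.5)**2))        # small error at the midpoint,
# since the exact solution of this test problem is u(x, y) = sin(pi x) sin(pi y)
```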
The boundary control volumes are treated as follows:
If a_i ∈ ∂Ω, then parts of the boundary ∂Ω_i lie on ∂Ω. At these nodes, the Dirichlet boundary conditions already prescribe values of the unknown function, and so there is no need to include the boundary control volumes into the balance equations (6.3).
A detailed description for the case of flux boundary conditions will be given later, in Section 6.2.4; see (6.23).

Example 6.2 We consider the same boundary value problem as in Example 6.1.

Problem variables: function values at the nodes a_i of a square grid with mesh width h > 0.
Control volumes: the subsquares Ω_i of the grid.

Figure 6.3. Problem variables and control volumes in a cell-vertex finite volume method.

In the interior of Ω, the resulting discretization yields a 12-point stencil (in the terminology of finite difference methods).
Remark 6.3 In the finite volume discretization of systems of partial differential equations (resulting from fluid mechanics, for example), both methods are used simultaneously for different variables; see Figure 6.4.


Figure 6.4. Finite volume discretization of systems of partial differential equations (a staggered grid with problem variables of type 1 and of type 2).



Assets and Drawbacks of the Finite Volume Method
Assets:
• Flexibility with respect to the geometry of the domain Ω (as in finite element methods).
• Admissibility of unstructured grids (as in finite element methods, important for adaptive methods).
• Simple assembling.
• Conservation of certain laws valid for the continuous problem (for example, conservation laws or maximum principles). This property is important in the numerical solution of differential equations with discontinuous coefficients or of convection-dominated diffusion-convection equations (see Section 6.2.4).
• Easy linearization of nonlinear problems (simpler than in finite element methods (Newton's method)).
• Simple discretization of boundary conditions (as in finite element methods, especially a “natural” treatment of Neumann or mixed boundary conditions).
• In principle, no restriction of the spatial dimension d of the domain Ω.

Drawbacks:
• Smaller field of applications in comparison with finite element or finite difference methods.
• Difficulties in the design of higher-order methods (no so-called p-version available as in the finite element method).
• In higher spatial dimensions (d ≥ 3), the construction of some classes or types of control volumes may be a complex task and thus may lead to a time-consuming assembling.
• Difficult mathematical analysis (stability, convergence, . . . ).




Exercises
6.1 Given the boundary value problem

    −( a u′ )′ = 0   in (0, 1) ,   u(0) = 1 ,   u(1) = 0 ,

with piecewise constant coefficients

    a(x) := { κα ,   x ∈ (0, ξ) ,
            { α ,    x ∈ (ξ, 1) ,

where α, κ are positive constants and ξ ∈ (0, 1) \ Q:

(a) What is the weak solution u ∈ H¹(0, 1) of this problem?
(b) For general “smooth” coefficients a, the differential equation is obviously equivalent to

    −a u″ − a′ u′ = 0 .

Therefore, the following discretization is suggested:

    −a_i ( u_{i−1} − 2u_i + u_{i+1} ) / h² − ( ( a_{i+1} − a_{i−1} ) / (2h) ) ( ( u_{i+1} − u_{i−1} ) / (2h) ) = 0 ,

where an equidistant grid with the nodes x_i = ih (i = 0, . . . , N + 1) and a_i := a(x_i), u_i :≈ u(x_i) is used.
This discretization is also formally correct in the given situation of discontinuous coefficients. Find the discrete solution (u_i)_{i=1}^{N} in this case.
(c) Under what conditions do the values u_i converge to u(x_i) for h → 0?

6.2 The Finite Volume Method for Linear Elliptic Differential Equations of Second Order on Triangular Grids

In this section we will explain the development and the analysis of a finite volume method of “cell-centred” type for a model problem. Here, Ω ⊂ R² is a bounded, simply connected domain with a polygonal boundary, but without slits.

6.2.1 Admissible Control Volumes

The Voronoi Diagram
By {a_i}_{i∈Λ} ⊂ Ω̄ we denote a consecutively numbered point set that includes all vertices of Ω, where Λ is the corresponding set of indices. Typically, the points a_i are placed at those positions where the values u(a_i) of the exact solution u are to be approximated. The convex set

    Ω̃_i := { x ∈ R² : |x − a_i| < |x − a_j|   for all j ≠ i }

is called the Voronoi polygon (or Dirichlet domain, Wigner–Seitz cell, Thiessen polygon, . . . ). The family {Ω̃_i}_{i∈Λ} is called the Voronoi diagram of the point set {a_i}_{i∈Λ}.

Figure 6.5. Voronoi diagram (the boundary of Ω and the boundaries of the Voronoi polygons Ω̃_i).

The Voronoi polygons are convex, but not necessarily bounded, sets (consider the situation near the boundary in Figure 6.5). Their boundaries are polygons. The vertices of these polygons are called Voronoi vertices.
It can be shown that at any Voronoi vertex at least three Voronoi polygons meet. According to this property, Voronoi vertices are classified into regular and degenerate Voronoi vertices: In a regular Voronoi vertex, the boundaries of exactly three Voronoi polygons meet, whereas a degenerate Voronoi vertex is shared by at least four Voronoi polygons. In the latter case, all the corresponding nodes a_i are located on some circle (they are “cocyclic”, cf. Figure 6.6).



Figure 6.6. Degenerate and regular Voronoi vertex (the nodes a1, . . . , a4 are cocyclic).

Now the elements Ω_i (control volumes) of the partition of Ω required for the definition of the finite volume method can be introduced as follows:

    Ω_i := Ω̃_i ∩ Ω ,   i ∈ Λ .

As a consequence, the domains Ω_i need not necessarily be convex if Ω is nonconvex (cf. Figure 6.5).
Furthermore, the following notation will be used:

    Λ_i := { j ∈ Λ \ {i} : ∂Ω_i ∩ ∂Ω_j ≠ ∅ } ,   i ∈ Λ ,   for the set of indices of neighbouring nodes,
    Γ_ij := ∂Ω_i ∩ ∂Ω_j ,   j ∈ Λ_i ,   for a joint piece of the boundaries of neighbouring control volumes,
    m_ij   for the length of Γ_ij .

The dual graph of the Voronoi diagram is defined as follows:
Any pair of points a_i, a_j such that m_ij > 0 is connected by a straight-line segment. In this way, a further partition of Ω with an interesting property results.
Theorem 6.4 If all Voronoi vertices are regular, then the dual graph coincides with the set of edges of a triangulation of the convex hull of the given point set.

This triangulation is called a Delaunay triangulation.
If among the Voronoi vertices there are degenerate ones, then a triangulation can be obtained from the dual graph by a subsequent local triangulation of the remaining m-polygons (m ≥ 4). A Delaunay triangulation has the interesting property that the two interior angles subtended by any given edge sum to no more than π. In this respect Delaunay triangulations satisfy the first part of the angle condition formulated in Section 3.9 for the maximum principle in finite element methods.
Therefore, if Ω is convex, then we automatically get a triangulation together with the Voronoi diagram. In the case of a nonconvex domain Ω, certain modifications could be required to achieve a correct triangulation.
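In computations one typically relies on library routines for these constructions. The sketch below is illustrative only (it assumes SciPy is available and uses a small random point set); it builds the Delaunay triangulation and the Voronoi diagram of the same points, mirroring the duality just discussed.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
points = rng.random((12, 2))                 # a small point set in the unit square

tri = Delaunay(points)                       # Delaunay triangulation (dual graph)
vor = Voronoi(points)                        # Voronoi diagram of the same points

print(tri.simplices.shape)                   # triangles as index triples
print(vor.vertices.shape)                    # Voronoi vertices
# Each inner Voronoi vertex is the circumcentre of a Delaunay triangle; for points in
# general position (no degenerate Voronoi vertices) the duality of Theorem 6.4 holds.
```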

Figure 6.7. Delaunay triangulation to the Voronoi diagram from Figure 6.5 (the marked edge has to be removed from the Delaunay triangulation).

The implication

    Voronoi diagram ⟹ Delaunay triangulation ,

which we have just discussed, suggests that we ask about the converse statement. We do not want to answer it completely at this point, but we give the following sufficient condition.

Theorem 6.5 If a conforming triangulation of Ω (in the sense of finite element methods) consists of nonobtuse triangles exclusively, then it is a Delaunay triangulation, and the corresponding Voronoi diagram can be constructed by means of the perpendicular bisectors of the triangles' edges.

We mention that the centre of the circumcircle of a nonobtuse triangle is located within the closure of that triangle.
In the analysis of the finite volume method, the following relation is important.

Lemma 6.6 Given a nonobtuse triangle K with vertices a_{i_k}, k ∈ {1, 2, 3}, then for the corresponding parts Ω_{i_k,K} := Ω_{i_k} ∩ K of the control volumes Ω_{i_k}, we have

    (1/4) |K| ≤ |Ω_{i_k,K}| ≤ (1/2) |K| ,   k ∈ {1, 2, 3} .

The Donald Diagram
In contrast to the Voronoi diagram, where the construction starts from a given point set, the starting point here is a triangulation T_h of Ω, which is allowed to contain obtuse triangles.
Again, let K be a triangle with vertices a_{i_k}, k ∈ {1, 2, 3}. We define

    Ω_{i_k,K} := { x ∈ K : λ_j(x) < λ_k(x) ,   j ≠ k } ,

where the λ_k denote the barycentric coordinates with respect to a_{i_k} (cf. (3.51)). Obviously, the barycentre satisfies a_S = (1/3)( a_{i_1} + a_{i_2} + a_{i_3} ), and (see, for comparison, Lemma 6.6)

    3 |Ω_{i_k,K}| = |K| ,   k ∈ {1, 2, 3} .                   (6.4)

This relation is a simple consequence of the geometric interpretation of the barycentric coordinates as area coordinates given in Section 3.3.

Figure 6.8. The subdomains Ω_{i_k,K} (triangle K with vertices a_{i_1}, a_{i_2}, a_{i_3} and barycentre a_S).


The required control volumes are defined as follows (see Figure 6.8):

    Ω_i := int ( ∪_{K : a_i ∈ ∂K} Ω̄_{i,K} ) ,   i ∈ Λ .

The family {Ω_i}_{i∈Λ} is called a Donald diagram.
The quantities Γ_ij, m_ij, and Λ_i are defined similarly as in the case of the Voronoi diagram. We mention that the boundary pieces Γ_ij are not necessarily straight, but polygonal in general.


6.2.2 Finite Volume Discretization
The model under consideration is a special case of equation (6.1). Instead of the matrix-valued diffusion coefficient K we will take a scalar coefficient k : Ω → R, that is, K = kI. Moreover, homogeneous Dirichlet boundary conditions are to be satisfied. So the boundary value problem reads as follows:

    −∇ · ( k ∇u − c u ) + r u = f   in Ω ,
                            u = 0   on ∂Ω ,                   (6.5)

with k, r, f : Ω → R, c : Ω → R².

The Case of the Voronoi Diagram
Let the domain Ω be partitioned by a Voronoi diagram and the corresponding Delaunay triangulation. Due to the homogeneous Dirichlet boundary conditions, it is sufficient to consider only those control volumes Ω_i that are associated with inner nodes a_i ∈ Ω. Therefore, we denote the set of indices of all inner nodes by

    Λ̊ := { i ∈ Λ : a_i ∈ Ω } .

In the first step, the differential equation (6.5) is integrated over the single control volumes Ω_i:

    −∫_{Ω_i} ∇ · ( k ∇u − c u ) dx + ∫_{Ω_i} r u dx = ∫_{Ω_i} f dx ,   i ∈ Λ̊ .        (6.6)

The application of Gauss's divergence theorem to the first integral of the left-hand side of (6.6) yields

    ∫_{Ω_i} ∇ · ( k ∇u − c u ) dx = ∫_{∂Ω_i} ν · ( k ∇u − c u ) dσ .

Due to ∂Ω_i = ∪_{j∈Λ_i} Γ_ij (cf. Figure 6.9), it follows that

    ∫_{Ω_i} ∇ · ( k ∇u − c u ) dx = Σ_{j∈Λ_i} ∫_{Γ_ij} ν_ij · ( k ∇u − c u ) dσ ,

where ν_ij is the (constant) outer unit normal to Γ_ij (with respect to Ω_i). In the next step we approximate the line integrals over Γ_ij.

Figure 6.9. The edge Γ_ij.

First, the coefficients k and ν_ij · c are approximated on Γ_ij by constants μ_ij > 0 and γ_ij, respectively:

    k|_{Γ_ij} ≈ μ_ij = const > 0 ,   ν_ij · c|_{Γ_ij} ≈ γ_ij = const .

In the simplest case, the approximation can be realized by the corresponding value at the midpoint a_{Γ_ij} of the straight-line segment Γ_ij. A better choice is

    γ_ij := { (1/m_ij) ∫_{Γ_ij} ν_ij · c dσ ,   m_ij > 0 ,
            { ν_ij · c( a_{Γ_ij} ) ,            m_ij = 0 .    (6.7)

We thus obtain

    ∫_{Ω_i} ∇ · ( k ∇u − c u ) dx ≈ Σ_{j∈Λ_i} ∫_{Γ_ij} [ μ_ij ( ν_ij · ∇u ) − γ_ij u ] dσ .

The normal derivatives are approximated by difference quotients; that is,

    ν_ij · ∇u ≈ ( u(a_j) − u(a_i) ) / d_ij   with   d_ij := |a_i − a_j| .

This formula is exact for functions that are linear along the straight-line segment between the points a_i and a_j. So it remains to approximate the integral of u over Γ_ij. For this, a convex combination of the values of u at

the nodes a_i and a_j is taken:

    u|_{Γ_ij} ≈ r_ij u(a_i) + (1 − r_ij) u(a_j) ,

where r_ij ∈ [0, 1] is a parameter to be defined subsequently. In general, r_ij depends on μ_ij, γ_ij, and d_ij.
Collecting all the above approximations, we arrive at the following relation:

    ∫_{Ω_i} ∇ · ( k ∇u − c u ) dx
        ≈ Σ_{j∈Λ_i} [ μ_ij ( u(a_j) − u(a_i) ) / d_ij − γ_ij ( r_ij u(a_i) + (1 − r_ij) u(a_j) ) ] m_ij .
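Per edge, the bracket above contributes one coefficient for u_i and one for u_j to row i of the discrete system. The helper below is a sketch only (the function name and the choice r_ij = 1/2 are assumptions; the text fixes r_ij later), collecting exactly the terms of the last display.

```python
def edge_coefficients(mu_ij, gamma_ij, m_ij, d_ij, r_ij=0.5):
    """Contribution of the edge Gamma_ij to row i of the finite volume system:
    the integral over Gamma_ij of -(k grad u - c u) . nu_ij is approximated by
    c_ii * u_i + c_ij * u_j."""
    c_ii = (mu_ij / d_ij + gamma_ij * r_ij) * m_ij
    c_ij = (-mu_ij / d_ij + gamma_ij * (1.0 - r_ij)) * m_ij
    return c_ii, c_ij

# Example: pure diffusion (gamma_ij = 0) recovers the difference-quotient weights
# +mu*m/d and -mu*m/d for u_i and u_j.
print(edge_coefficients(mu_ij=1.0, gamma_ij=0.0, m_ij=0.5, d_ij=0.25))
```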

To approximate the remaining integrals from (6.6), the following formulas are used:

    ∫_{Ω_i} r u dx ≈ r(a_i) u(a_i) m_i =: r_i u(a_i) m_i   with   m_i := |Ω_i| ,

    ∫_{Ω_i} f dx ≈ f(a_i) m_i =: f_i m_i .

Instead of r_i := r(a_i) or f_i := f(a_i), the approximations

    r_i := (1/m_i) ∫_{Ω_i} r dx ,   respectively   f_i := (1/m_i) ∫_{Ω_i} f dx ,       (6.8)

can also be used. Denoting the unknown approximate values for u(a_i) by u_i, we obtain the following linear system of equations:

    Σ_{j∈Λ_i} [ μ_ij ( u_i − u_j ) / d_ij + γ_ij ( r_ij u_i + (1 − r_ij) u_j ) ] m_ij + r_i u_i m_i        (6.9)