Note that the spectral radius $\rho(A)$ is not a matrix norm, as the following simple example shows:

For $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, we have that $A \ne 0$ but $\rho(A) = 0$.

However, for any matrix norm $\|\cdot\|$ the following relation is valid:
$$\rho(A) \le \|A\| \,. \qquad (A3.8)$$
Very often, matrices and vectors simultaneously appear as a product $Ax$. In order to be able to handle such situations, there should be a certain correlation between matrix and vector norms.
A matrix norm $\|\cdot\|$ is called mutually consistent or compatible with the vector norm $|\cdot|$ if the inequality
$$|Ax| \le \|A\|\,|x| \qquad (A3.9)$$
is valid for all $x \in \mathbb{R}^n$ and all $A \in \mathbb{R}^{n,n}$.

Examples of mutually consistent norms are

$\|A\|_G$ or $\|A\|_\infty$ with $|x|_\infty$,

$\|A\|_G$ or $\|A\|_1$ with $|x|_1$,

$\|A\|_G$ or $\|A\|_F$ with $|x|_2$.
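To make the compatibility relation (A3.9) concrete, the following sketch checks it numerically for the three pairings listed above, omitting the total norm $\|A\|_G$, whose definition appears earlier in the appendix and outside this excerpt. It assumes NumPy; the dimension, the random test data, and the tolerance are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pairs (matrix norm, vector norm) that are mutually consistent in the
# sense of (A3.9): |Ax| <= ||A|| |x| for all x.
pairs = [
    (lambda A: np.linalg.norm(A, np.inf), lambda x: np.linalg.norm(x, np.inf)),
    (lambda A: np.linalg.norm(A, 1),      lambda x: np.linalg.norm(x, 1)),
    (lambda A: np.linalg.norm(A, 'fro'),  lambda x: np.linalg.norm(x, 2)),
]

for _ in range(1000):
    A = rng.standard_normal((5, 5))
    x = rng.standard_normal(5)
    for mat_norm, vec_norm in pairs:
        assert vec_norm(A @ x) <= mat_norm(A) * vec_norm(x) + 1e-12
print("compatibility (A3.9) holds for all sampled pairs")
```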

In many cases, the bound for $|Ax|$ given by (A3.9) is not sharp enough; i.e., for all $x \ne 0$ we merely have the strict inequality
$$|Ax| < \|A\|\,|x| \,.$$
Therefore, the question arises of how to find, for a given vector norm, a compatible matrix norm such that equality holds in (A3.9) for at least one element $x \ne 0$.
Given a vector norm $|x|$, the number
$$\|A\| := \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{|Ax|}{|x|} = \sup_{x \in \mathbb{R}^n :\, |x| = 1} |Ax|$$
is called the induced or subordinate matrix norm.
The induced norm is a matrix norm compatible with the given vector norm. It is the smallest norm among all matrix norms that are compatible with the given vector norm $|x|$.
To illustrate the definition of the induced matrix norm, the matrix norm induced by the Euclidean vector norm is derived:
$$\|A\|_2 := \max_{|x|_2 = 1} |Ax|_2 = \max_{|x|_2 = 1} \sqrt{x^T (A^T A)\, x} = \sqrt{\lambda_{\max}(A^T A)} = \sqrt{\rho(A^T A)} \,. \qquad (A3.10)$$
The matrix norm $\|A\|_2$ induced by the Euclidean vector norm is also called the spectral norm. This term becomes understandable in the special case of a symmetric matrix $A$. If $\lambda_1, \dots, \lambda_n$ denote the real eigenvalues of $A$, then the matrix $A^T A = A^2$ has the eigenvalues $\lambda_i^2$, and hence
$$\|A\|_2 = \max_i |\lambda_i(A)| \,.$$
For symmetric matrices, the spectral norm coincides with the spectral radius. Because of (A3.8), it is the smallest possible matrix norm in that case.
As a further example, the maximum row sum $\|A\|_\infty$ is the matrix norm induced by the maximum norm $|x|_\infty$.
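As a numerical illustration of (A3.10) and of the last statement, the sketch below (assuming NumPy; the $3 \times 3$ test matrix is arbitrary) computes the spectral norm once via the eigenvalues of $A^T A$ and once with the library routine, and compares the norm induced by $|x|_\infty$ with the maximum row sum.

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  3.0, 1.0],
              [0.0, -2.0, 4.0]])

# (A3.10): the spectral norm is the square root of the largest eigenvalue of A^T A.
lam_max = np.max(np.linalg.eigvalsh(A.T @ A))
print(np.sqrt(lam_max), np.linalg.norm(A, 2))          # both values coincide

# The matrix norm induced by |x|_inf is the maximum row sum.
print(np.max(np.sum(np.abs(A), axis=1)), np.linalg.norm(A, np.inf))
```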
The number
$$\kappa(A) := \|A\|\,\|A^{-1}\|$$
is called the condition number of the matrix $A$ with respect to the matrix norm under consideration.
The following relation holds:
$$1 \le \|I\| = \|A A^{-1}\| \le \|A\|\,\|A^{-1}\| \,.$$
For $\|\cdot\| = \|\cdot\|_p$, the condition number is also denoted by $\kappa_p(A)$. If all eigenvalues of $A$ are real, the number
$$\kappa(A) := \lambda_{\max}(A)/\lambda_{\min}(A)$$
is called the spectral condition number. Hence, for a symmetric, positive definite matrix $A$ the equality $\kappa(A) = \kappa_2(A)$ is valid.
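The coincidence of the two notions can be observed numerically; in the sketch below (NumPy, with an illustrative symmetric, positive definite test matrix) the spectral condition number $\lambda_{\max}/\lambda_{\min}$ agrees with $\kappa_2(A) = \|A\|_2\,\|A^{-1}\|_2$.

```python
import numpy as np

B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # symmetric and strictly diagonally dominant,
                                     # hence positive definite

eig = np.linalg.eigvalsh(B)
kappa_spectral = eig.max() / eig.min()                     # lambda_max / lambda_min
kappa_2 = np.linalg.norm(B, 2) * np.linalg.norm(np.linalg.inv(B), 2)
print(kappa_spectral, kappa_2, np.linalg.cond(B, 2))       # all three coincide
```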
Occasionally, it is necessary to estimate small perturbations of nonsingular matrices. For this purpose, the following result is useful (perturbation lemma or Neumann's lemma). Let $A \in \mathbb{R}^{n,n}$ satisfy $\|A\| < 1$ with respect to an arbitrary, but fixed, matrix norm. Then the inverse of $I - A$ exists and can be represented as a convergent power series of the form
$$(I - A)^{-1} = \sum_{j=0}^{\infty} A^j ,$$
with
$$\|(I - A)^{-1}\| \le \frac{1}{1 - \|A\|} \,. \qquad (A3.11)$$
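The perturbation lemma is easy to check numerically. The sketch below (NumPy; the random matrix is rescaled so that $\|A\|_\infty = 0.5 < 1$, and the truncation length is an arbitrary choice) compares a partial sum of the Neumann series with the exact inverse of $I - A$ and evaluates both sides of the bound (A3.11).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A *= 0.5 / np.linalg.norm(A, np.inf)            # enforce ||A||_inf = 0.5 < 1

exact = np.linalg.inv(np.eye(4) - A)

# Partial sums of the Neumann series sum_j A^j converge to (I - A)^{-1}.
S, P = np.zeros((4, 4)), np.eye(4)
for _ in range(60):
    S += P
    P = P @ A
print(np.linalg.norm(S - exact, np.inf))         # close to machine precision

# Bound (A3.11): ||(I - A)^{-1}|| <= 1 / (1 - ||A||).
print(np.linalg.norm(exact, np.inf), 1.0 / (1.0 - np.linalg.norm(A, np.inf)))
```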

Special Matrices

The matrix $A \in \mathbb{R}^{n,n}$ is called an upper, respectively lower, triangular matrix if its entries satisfy $a_{ij} = 0$ for $i > j$, respectively $a_{ij} = 0$ for $i < j$.
A matrix $H \in \mathbb{R}^{n,n}$ is called an (upper) Hessenberg matrix if it has the following structure:
$$H := \begin{pmatrix}
h_{11} & h_{12} & \cdots & h_{1n} \\
h_{21} & \ddots & & \vdots \\
 & \ddots & \ddots & \vdots \\
0 & & h_{n,n-1} & h_{nn}
\end{pmatrix}$$
(that is, $h_{ij} = 0$ for $i > j + 1$).
The matrix $A \in \mathbb{R}^{n,n}$ satisfies the strict row sum criterion (or is strictly row diagonally dominant) if it satisfies
$$\sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| < |a_{ii}| \quad \text{for all } i = 1, \dots, n \,.$$
It satisfies the strict column sum criterion if the following relation holds:
$$\sum_{\substack{i=1 \\ i \ne j}}^{n} |a_{ij}| < |a_{jj}| \quad \text{for all } j = 1, \dots, n \,.$$
The matrix $A \in \mathbb{R}^{n,n}$ satisfies the weak row sum criterion (or is weakly row diagonally dominant) if
$$\sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| \le |a_{ii}| \quad \text{holds for all } i = 1, \dots, n$$
and the strict inequality "<" is valid for at least one number $i \in \{1, \dots, n\}$.
The weak column sum criterion is defined similarly.
The matrix $A \in \mathbb{R}^{n,n}$ is called reducible if there exist subsets $N_1, N_2 \subset \{1, \dots, n\}$ with $N_1 \cap N_2 = \emptyset$, $N_1 \ne \emptyset \ne N_2$, and $N_1 \cup N_2 = \{1, \dots, n\}$ such that the following property is satisfied:
$$\text{For all } i \in N_1,\ j \in N_2: \quad a_{ij} = 0 \,.$$
A matrix that is not reducible is called irreducible.
A matrix $A \in \mathbb{R}^{n,n}$ is called an $L_0$-matrix if for $i, j \in \{1, \dots, n\}$ the inequalities
$$a_{ii} \ge 0 \quad \text{and} \quad a_{ij} \le 0 \ (i \ne j)$$
are valid. An $L_0$-matrix is called an L-matrix if all diagonal entries are positive.
A matrix $A \in \mathbb{R}^{n,n}$ is called monotone (or of monotone type) if the relation $Ax \le Ay$ for two (otherwise arbitrary) elements $x, y \in \mathbb{R}^n$ implies $x \le y$. Here the relation sign is to be understood componentwise.
A matrix of monotone type is invertible.
A matrix $A \in \mathbb{R}^{n,n}$ is a matrix of monotone type if and only if it is invertible and all entries of the inverse are nonnegative.
An important subclass of matrices of monotone type is formed by the so-called M-matrices.
A monotone matrix $A$ with $a_{ij} \le 0$ for $i \ne j$ is called an M-matrix.
Let $A \in \mathbb{R}^{n,n}$ be a matrix with $a_{ij} \le 0$ for $i \ne j$ and $a_{ii} \ge 0$ ($i, j \in \{1, \dots, n\}$). In addition, let $A$ satisfy one of the following conditions:

(i) $A$ satisfies the strict row sum criterion.

(ii) $A$ satisfies the weak row sum criterion and is irreducible.

Then $A$ is an M-matrix.
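Criterion (i) can be tested directly. The sketch below (NumPy; the tridiagonal test matrix with diagonal entries 3 and off-diagonal entries -1 is only an illustration) verifies the sign pattern of an L-matrix together with the strict row sum criterion and then confirms the resulting monotonicity, i.e., that the inverse is entrywise nonnegative.

```python
import numpy as np

n = 6
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # a_ii = 3 > 0, a_ij <= 0

# Sign pattern of an L-matrix and the strict row sum criterion (condition (i)):
off_diag_sums = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
assert np.all(np.diag(A) > 0)
assert np.all(A - np.diag(np.diag(A)) <= 0)
assert np.all(off_diag_sums < np.abs(np.diag(A)))

# Consequence: A is an M-matrix, so it is of monotone type and A^{-1} >= 0 entrywise.
print(np.min(np.linalg.inv(A)))                            # nonnegative (up to round-off)
```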


A.4 Some Definitions and Arguments of Linear Functional Analysis

Working with vector spaces whose elements are (classical or generalized) functions, it is desirable to have a measure for the "length" or "magnitude" of a function, and, as a consequence, for the distance of two functions.

Let $V$ be a real vector space (in short, an $\mathbb{R}$ vector space) and let $\|\cdot\|$ be a real-valued mapping $\|\cdot\| : V \to \mathbb{R}$.
The pair $(V, \|\cdot\|)$ is called a normed space ("$V$ is endowed with the norm $\|\cdot\|$") if the following properties hold:
$$\|u\| \ge 0 \ \text{ for all } u \in V, \qquad \|u\| = 0 \;\Leftrightarrow\; u = 0, \qquad (A4.1)$$
$$\|\alpha u\| = |\alpha|\,\|u\| \ \text{ for all } \alpha \in \mathbb{R},\ u \in V, \qquad (A4.2)$$
$$\|u + v\| \le \|u\| + \|v\| \ \text{ for all } u, v \in V. \qquad (A4.3)$$
The property (A4.1) is called definiteness; (A4.3) is called the triangle inequality. If a mapping $\|\cdot\| : V \to \mathbb{R}$ satisfies only (A4.2) and (A4.3), it is called a seminorm. Due to (A4.2), we still have $\|0\| = 0$, but there may exist elements $u \ne 0$ with $\|u\| = 0$.
A particularly interesting example of a norm can be obtained if the space $V$ is equipped with a so-called scalar product. This is a mapping $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ with the following properties:

(1) $\langle \cdot, \cdot \rangle$ is a bilinear form; that is,
$$\langle u, v_1 + v_2 \rangle = \langle u, v_1 \rangle + \langle u, v_2 \rangle \quad \text{for all } u, v_1, v_2 \in V,$$
$$\langle u, \alpha v \rangle = \alpha \langle u, v \rangle \quad \text{for all } u, v \in V,\ \alpha \in \mathbb{R}, \qquad (A4.4)$$
and an analogous relation is valid for the first argument.

(2) $\langle \cdot, \cdot \rangle$ is symmetric; that is,
$$\langle u, v \rangle = \langle v, u \rangle \quad \text{for all } u, v \in V. \qquad (A4.5)$$

(3) $\langle \cdot, \cdot \rangle$ is positive; that is,
$$\langle u, u \rangle \ge 0 \quad \text{for all } u \in V. \qquad (A4.6)$$

(4) $\langle \cdot, \cdot \rangle$ is definite; that is,
$$\langle u, u \rangle = 0 \;\Leftrightarrow\; u = 0 \,. \qquad (A4.7)$$

A positive and definite bilinear form is called positive definite.
A scalar product $\langle \cdot, \cdot \rangle$ defines a norm on $V$ in a natural way if we set
$$\|v\| := \langle v, v \rangle^{1/2} \,. \qquad (A4.8)$$
In the absence of the definiteness (A4.7), only a seminorm is induced.
A norm (or a seminorm) induced by a scalar product (respectively by a symmetric and positive bilinear form) has some interesting properties. For example, it satisfies the Cauchy-Schwarz inequality, that is,
$$|\langle u, v \rangle| \le \|u\|\,\|v\| \quad \text{for all } u, v \in V, \qquad (A4.9)$$
and the parallelogram identity
$$\|u + v\|^2 + \|u - v\|^2 = 2\left( \|u\|^2 + \|v\|^2 \right) \quad \text{for all } u, v \in V. \qquad (A4.10)$$
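Both (A4.9) and (A4.10) can be observed directly for the Euclidean scalar product on $\mathbb{R}^n$; the sketch below (NumPy, random vectors and tolerances chosen purely for illustration) is a minimal numerical check.

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    u, v = rng.standard_normal(8), rng.standard_normal(8)
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    # Cauchy-Schwarz inequality (A4.9)
    assert abs(u @ v) <= nu * nv + 1e-12
    # Parallelogram identity (A4.10)
    lhs = np.linalg.norm(u + v) ** 2 + np.linalg.norm(u - v) ** 2
    assert abs(lhs - 2.0 * (nu ** 2 + nv ** 2)) <= 1e-9
print("Cauchy-Schwarz and parallelogram identity confirmed on random samples")
```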

Typical examples of normed spaces are the spaces $\mathbb{R}^n$ equipped with one of the $\ell_p$-norms (for some fixed $p \in [1, \infty]$). In particular, the Euclidean norm (A3.3) is induced by the Euclidean scalar product
$$(x, y) \mapsto x \cdot y \quad \text{for all } x, y \in \mathbb{R}^n \,. \qquad (A4.11)$$
On the other hand, infinite-dimensional function spaces play an important role (see Appendix A.5).
If a vector space $V$ is equipped with a scalar product $\langle \cdot, \cdot \rangle$, then, in analogy to $\mathbb{R}^n$, an element $u \in V$ is said to be orthogonal to $v \in V$ if
$$\langle u, v \rangle = 0 \,. \qquad (A4.12)$$
Given a normed space $(V, \|\cdot\|)$, it is easy to define the concept of convergence of a sequence $(u_i)_i$ in $V$ to $u \in V$:
$$u_i \to u \ \text{ for } i \to \infty \quad \Longleftrightarrow \quad \|u_i - u\| \to 0 \ \text{ for } i \to \infty \,. \qquad (A4.13)$$
Often, it is necessary to consider function spaces endowed with different norms. In such situations, different kinds of convergence may occur. However, if the corresponding norms are equivalent, then there is no change in the type of convergence. Two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ in $V$ are called equivalent if there exist constants $C_1, C_2 > 0$ such that
$$C_1 \|u\|_1 \le \|u\|_2 \le C_2 \|u\|_1 \quad \text{for all } u \in V \,. \qquad (A4.14)$$

If there is only a one-sided inequality of the form
$$\|u\|_2 \le C\, \|u\|_1 \quad \text{for all } u \in V \qquad (A4.15)$$
with a constant $C > 0$, then the norm $\|\cdot\|_1$ is called stronger than the norm $\|\cdot\|_2$.
In a finite-dimensional vector space, all norms are equivalent. Examples can be found in Appendix A.3. In particular, it is important to observe that the constants may depend on the dimension $n$ of the finite-dimensional vector space. This observation also indicates that in the case of infinite-dimensional vector spaces, the equivalence of two different norms cannot be expected, in general.
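The dependence of the equivalence constants on the dimension can already be seen for the $\ell_1$- and $\ell_\infty$-norms on $\mathbb{R}^n$, where $|x|_\infty \le |x|_1 \le n\,|x|_\infty$ and both bounds are attained; the sketch below (NumPy) evaluates the ratio $|x|_1 / |x|_\infty$ for the two extremal vectors.

```python
import numpy as np

for n in (2, 10, 100):
    e1 = np.zeros(n); e1[0] = 1.0   # a unit coordinate vector: ratio 1
    ones = np.ones(n)               # the all-ones vector: ratio n
    print(n,
          np.linalg.norm(e1, 1) / np.linalg.norm(e1, np.inf),
          np.linalg.norm(ones, 1) / np.linalg.norm(ones, np.inf))
```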
As a consequence of (A4.14), two equivalent norms $\|\cdot\|_1$, $\|\cdot\|_2$ in $V$ yield the same type of convergence:
$$u_i \to u \ \text{ w.r.t. } \|\cdot\|_1 \;\Leftrightarrow\; \|u_i - u\|_1 \to 0 \;\Leftrightarrow\; \|u_i - u\|_2 \to 0 \;\Leftrightarrow\; u_i \to u \ \text{ w.r.t. } \|\cdot\|_2 \,. \qquad (A4.16)$$
In this book, the finite-dimensional vector space $\mathbb{R}^n$ is used in two aspects: For $n = d$, it is the basic space of independent variables, and for $n = M$ or $n = m$ it represents the finite-dimensional trial space. In the first case, the equivalence of all norms can be used in all estimates without any side effects, whereas in the second case the aim is to obtain uniform estimates with respect to all $M$ and $m$, and so the dependence of the equivalence constants on $M$ and $m$ has to be tracked carefully.
Now we consider two normed spaces $(V, \|\cdot\|_V)$ and $(W, \|\cdot\|_W)$. A mapping $f : V \to W$ is called continuous in $v \in V$ if for all sequences $(v_i)_i$ in $V$ with $v_i \to v$ for $i \to \infty$ we get
$$f(v_i) \to f(v) \quad \text{for } i \to \infty \,.$$
Note that the first convergence is measured in $\|\cdot\|_V$ and the second one in $\|\cdot\|_W$. Hence a change of the norm may have an influence on the continuity. As in classical analysis, we can say that
$$f \text{ is continuous in all } v \in V \quad \Longleftrightarrow \quad f^{-1}[G] \text{ is closed for each closed } G \subset W \,. \qquad (A4.17)$$
Here, a subset $G \subset W$ of a normed space $W$ is called closed if for any sequence $(u_i)_i$ from $G$ such that $u_i \to u$ for $i \to \infty$ the inclusion $u \in G$ follows. Because of (A4.17), the closedness of a set can be verified by showing that it is a continuous preimage of a closed set.
The concept of continuity is a qualitative relation between the preimage and the image. A quantitative relation is given by the stronger notion of Lipschitz continuity:
A mapping $f : V \to W$ is called Lipschitz continuous if there exists a constant $L > 0$, the Lipschitz constant, such that
$$\|f(u) - f(v)\|_W \le L\, \|u - v\|_V \quad \text{for all } u, v \in V \,. \qquad (A4.18)$$



[Figure A.1. Lipschitz continuity (for V = W = R): the admissible region for f(y) lies between the lines through (x, f(x)) with slopes L and -L.]

A Lipschitz continuous mapping with L < 1 is called contractive or a
contraction; cf. Figure A.1.
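As an elementary illustration for $V = W = \mathbb{R}$, the map $f(x) = \cos(x)/2$ is Lipschitz continuous with $L = 1/2$ (since $|f'| \le 1/2$) and hence a contraction; the sketch below (plain Python, with an arbitrary starting value and tolerance) iterates $x_{k+1} = f(x_k)$ and observes convergence to a fixed point, a consequence of Banach's fixed-point theorem, which is not restated in this excerpt.

```python
import math

def f(x):
    return 0.5 * math.cos(x)    # |f'(x)| = |sin(x)| / 2 <= 1/2, hence a contraction

x = 5.0
for _ in range(100):
    x_new = f(x)
    if abs(x_new - x) < 1e-14:  # successive iterates contract by a factor <= 1/2
        break
    x = x_new
print(x, abs(f(x) - x))         # approximate fixed point and its tiny residual
```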
Most of the mappings used are linear; that is, they satisfy
$$f(u + v) = f(u) + f(v), \qquad f(\lambda u) = \lambda f(u) \qquad \text{for all } u, v \in V \text{ and } \lambda \in \mathbb{R} \,. \qquad (A4.19)$$
For a linear mapping, Lipschitz continuity is equivalent to boundedness, that is, to the existence of a constant $C > 0$ such that
$$\|f(u)\|_W \le C\, \|u\|_V \quad \text{for all } u \in V \,. \qquad (A4.20)$$

In fact, for a linear mapping $f$, the continuity at one point is equivalent to (A4.20). Linear, continuous mappings acting from $V$ to $W$ are also called (linear, continuous) operators and are denoted by capital letters, for example $S, T, \dots$.
In the case $V = W = \mathbb{R}^n$, the linear, continuous operators in $\mathbb{R}^n$ are the mappings $x \mapsto Ax$ defined by matrices $A \in \mathbb{R}^{n,n}$. Their boundedness, for example with respect to $\|\cdot\|_V = \|\cdot\|_W = \|\cdot\|_\infty$, is an immediate consequence of the compatibility property of the $\|\cdot\|_\infty$-norm. Moreover, since all norms in $\mathbb{R}^n$ are equivalent, these mappings are bounded with respect to any norms in $\mathbb{R}^n$.
Similarly to (A4.20), a bilinear form $f : V \times V \to \mathbb{R}$ is continuous if it is bounded, that is, if there exists a constant $C > 0$ such that
$$|f(u, v)| \le C\, \|u\|_V \|v\|_V \quad \text{for all } u, v \in V \,. \qquad (A4.21)$$

In particular, due to (A4.9) any scalar product is continuous with respect to the induced norm of $V$; that is,
$$u_i \to u,\ v_i \to v \;\Longrightarrow\; \langle u_i, v_i \rangle \to \langle u, v \rangle \,. \qquad (A4.22)$$
Now let $(V, \|\cdot\|_V)$ be a normed space and $W$ a subspace that is (additionally to $\|\cdot\|_V$) endowed with the norm $\|\cdot\|_W$. The embedding from $(W, \|\cdot\|_W)$ to $(V, \|\cdot\|_V)$, i.e., the linear mapping that assigns any element of $W$ to itself but considered as an element of $V$, is continuous iff the norm $\|\cdot\|_W$ is stronger than the norm $\|\cdot\|_V$ (cf. (A4.15)).
The collection of linear, continuous operators from $(V, \|\cdot\|_V)$ to $(W, \|\cdot\|_W)$ forms an $\mathbb{R}$ vector space with the following (argumentwise) operations:
$$(T + S)(u) := T(u) + S(u) \quad \text{for all } u \in V,$$
$$(\lambda T)(u) := \lambda\, T(u) \quad \text{for all } u \in V,$$
for all operators $T, S$ and $\lambda \in \mathbb{R}$. This space is denoted by
$$L[V, W] \,. \qquad (A4.23)$$
In the special case $W = \mathbb{R}$, the corresponding operators are called linear, continuous functionals, and the notation
$$V' := L[V, \mathbb{R}] \qquad (A4.24)$$
is used. The $\mathbb{R}$ vector space $L[V, W]$ can be equipped with a norm, the so-called operator norm, by
$$\|T\| := \sup \big\{ \|T(u)\|_W \,:\, u \in V,\ \|u\|_V \le 1 \big\} \quad \text{for } T \in L[V, W] \,. \qquad (A4.25)$$
Here $\|T\|$ is the smallest constant such that (A4.20) holds. Specifically, for a functional $f \in V'$, we have that
$$\|f\| = \sup \big\{ |f(u)| \,:\, u \in V,\ \|u\|_V \le 1 \big\} \,.$$
For example, in the case $V = W = \mathbb{R}^n$ and $\|u\|_V = \|u\|_W$, the norm of a linear, bounded operator that is represented by a matrix $A \in \mathbb{R}^{n,n}$ coincides with the corresponding induced matrix norm (cf. Appendix A.3).
Let $(V, \|\cdot\|_V)$ be a normed space. A sequence $(u_i)_i$ in $V$ is called a Cauchy sequence if for any $\varepsilon > 0$ there exists a number $n_0 \in \mathbb{N}$ such that
$$\|u_i - u_j\|_V \le \varepsilon \quad \text{for all } i, j \in \mathbb{N} \text{ with } i, j \ge n_0 \,.$$
The space $V$ is called complete or a Banach space if for any Cauchy sequence $(u_i)_i$ in $V$ there exists an element $u \in V$ such that $u_i \to u$ for $i \to \infty$. If the norm $\|\cdot\|_V$ of a Banach space $V$ is induced by a scalar product, then $V$ is called a Hilbert space.
A subspace $W$ of a Banach space is complete iff it is closed. A basic problem in the variational treatment of boundary value problems consists in the fact that the space of continuous functions (cf. the preliminary definition (2.7)), which is required to be taken as a basis, is not complete with respect to the norm ($\|\cdot\|_l$, $l = 0$ or $l = 1$). However, if in addition to the normed space $(W, \|\cdot\|)$, a larger space $V$ is given that is complete with respect to the norm $\|\cdot\|$, then that space or the closure
$$\widetilde{W} := \overline{W} \qquad (A4.26)$$
(as the smallest Banach space containing $W$) can be used. Such a completion can be introduced for any normed space in an abstract way. The problem is that the "nature" of the limiting elements remains vague.
If the relation (A4.26) is valid for some normed space $W$, then $W$ is called dense in $\widetilde{W}$. In fact, given $W$, all "essential" elements of $\widetilde{W}$ are already captured. For example, if $T$ is a linear, continuous operator from $(\widetilde{W}, \|\cdot\|)$ to another normed space, then the identity
$$T(u) = 0 \quad \text{for all } u \in W \qquad (A4.27)$$
is sufficient for
$$T(u) = 0 \quad \text{for all } u \in \widetilde{W} \,. \qquad (A4.28)$$
The space of linear, bounded operators is complete if the image space is complete. In particular, the space $V'$ of linear, bounded functionals on the normed space $V$ is always complete.


A.5 Function Spaces

In this section $G \subset \mathbb{R}^d$ denotes a bounded domain.
The function space $C(G)$ contains all (real-valued) functions defined on $G$ that are continuous in $G$. By $C^l(G)$, $l \in \mathbb{N}$, the set of $l$-times continuously differentiable functions on $G$ is denoted. Usually, for the sake of consistency, the conventions $C^0(G) := C(G)$ and $C^\infty(G) := \bigcap_{l=0}^{\infty} C^l(G)$ are used.
Functions from $C^l(G)$, $l \in \mathbb{N}_0$, and $C^\infty(G)$ need not be bounded, as for $d = 1$ the example $f(x) := x^{-1}$, $x \in (0, 1)$, shows.
To overcome this difficulty, further spaces of continuous functions are introduced. The space $C(\overline{G})$ contains all bounded and uniformly continuous functions on $G$, whereas $C^l(\overline{G})$, $l \in \mathbb{N}$, consists of functions with bounded and uniformly continuous derivatives up to order $l$ on $G$. Here the conventions $C^0(\overline{G}) := C(\overline{G})$ and $C^\infty(\overline{G}) := \bigcap_{l=0}^{\infty} C^l(\overline{G})$ are used, too.
The space $C_0(G)$, respectively $C_0^l(G)$, $l \in \mathbb{N}$, denotes the set of all those continuous, respectively $l$-times continuously differentiable, functions the supports of which are contained in $G$. Often this set is called the set of functions with compact support in $G$. Since $G$ is bounded, this means that the supports do not intersect boundary points of $G$. We also set $C_0^0(G) := C_0(G)$ and $C_0^\infty(G) := C_0(G) \cap C^\infty(G)$.


The linear space $L^p(G)$, $p \in [1, \infty)$, contains all Lebesgue measurable functions defined on $G$ for which the $p$th power of their absolute value is Lebesgue integrable on $G$. The norm in $L^p(G)$ is defined as follows:
$$\|u\|_{0,p,G} := \left( \int_G |u|^p \, dx \right)^{1/p} , \quad p \in [1, \infty) \,.$$
In the case $p = 2$, the specification of $p$ is frequently omitted; that is, $\|u\|_{0,G} = \|u\|_{0,2,G}$. The $L^2(G)$ scalar product
$$\langle u, v \rangle_{0,G} := \int_G u v \, dx \,, \quad u, v \in L^2(G) \,,$$
induces the $L^2(G)$-norm by setting $\|u\|_{0,G} := \langle u, u \rangle_{0,G}^{1/2}$.

The space $L^\infty(G)$ contains all measurable, essentially bounded functions on $G$, where a function $u : G \to \mathbb{R}$ is called essentially bounded if the quantity
$$\|u\|_{\infty,G} := \inf_{G_0 \subset G :\, |G_0|_d = 0} \; \sup_{x \in G \setminus G_0} |u(x)|$$
is finite. For continuous functions, this norm coincides with the usual maximum norm:
$$\|u\|_{\infty,G} = \max_{x \in \overline{G}} |u(x)| \,, \quad u \in C(\overline{G}) \,.$$
For $1 \le q \le p \le \infty$, we have $L^p(G) \subset L^q(G)$, and the embedding is continuous.
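For a bounded domain this embedding follows from Hölder's inequality, e.g. $\|u\|_{0,1,G} \le |G|^{1/2}\, \|u\|_{0,2,G}$. The sketch below (NumPy; $G = (0,1)$ and $u(x) = x^{-1/4}$, an unbounded but square-integrable function, are illustrative choices) checks this inequality with a simple midpoint rule.

```python
import numpy as np

N = 200000
x = (np.arange(N) + 0.5) / N          # midpoint rule on G = (0, 1), so |G| = 1
u = x ** (-0.25)                      # unbounded near 0, but u is in L^2(0, 1)

norm_L1 = np.sum(np.abs(u)) / N                       # approximates 4/3
norm_L2 = np.sqrt(np.sum(u ** 2) / N)                 # approximates sqrt(2)
print(norm_L1, norm_L2, norm_L1 <= 1.0 * norm_L2)     # embedding constant |G|^{1/2} = 1
```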
The space $W_p^l(G)$, $l \in \mathbb{N}$, $p \in [1, \infty]$, consists of all $l$-times weakly differentiable functions from $L^p(G)$ with derivatives in $L^p(G)$. In the special case $p = 2$, we also write $H^l(G) := W_2^l(G)$. In analogy to the case of continuous functions, the convention $H^0(G) := L^2(G)$ is used. The norm in $W_p^l(G)$ is defined as follows:
$$\|u\|_{l,p,G} := \left( \sum_{|\alpha| \le l} \int_G |\partial^\alpha u|^p \, dx \right)^{1/p} , \quad p \in [1, \infty) \,,$$
$$\|u\|_{l,\infty,G} := \max_{|\alpha| \le l} \|\partial^\alpha u\|_{\infty,G} \,.$$
In $H^l(G)$ a scalar product can be defined by
$$\langle u, v \rangle_{l,G} := \sum_{|\alpha| \le l} \int_G \partial^\alpha u \, \partial^\alpha v \, dx \,, \quad u, v \in H^l(G) \,.$$
The norm induced by this scalar product is denoted by $\|\cdot\|_{l,G}$, $l \in \mathbb{N}$:
$$\|u\|_{l,G} := \langle u, u \rangle_{l,G}^{1/2} \,.$$
For $l \in \mathbb{N}$, the symbol $|\cdot|_{l,G}$ stands for the corresponding $H^l(G)$-seminorm:
$$|u|_{l,G} := \left( \sum_{|\alpha| = l} \int_G |\partial^\alpha u|^2 \, dx \right)^{1/2} .$$
The space $H_0^1(G)$ is defined as the closure (or completion) of $C_0^\infty(G)$ in the norm $\|\cdot\|_1$ of $H^1(G)$.
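To connect the $H^1(G)$ norm and seminorm with something computable, the sketch below (NumPy; $G = (0,1)$, $u(x) = \sin(\pi x)$, a uniform grid and forward differences are ad hoc choices) approximates $\|u\|_0^2$, $|u|_1^2$ and $\|u\|_1^2 = \|u\|_0^2 + |u|_1^2$ and compares them with the exact values $1/2$, $\pi^2/2$ and $(1 + \pi^2)/2$.

```python
import numpy as np

N = 2000
x = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N
u = np.sin(np.pi * x)                      # a smooth function in H^1_0(0, 1)

du = np.diff(u) / h                        # forward-difference approximation of u'
l2_sq = np.sum(u[:-1] ** 2) * h            # ~ ||u||_0^2 = 1/2
semi_sq = np.sum(du ** 2) * h              # ~ |u|_1^2 = pi^2 / 2
print(l2_sq, semi_sq, l2_sq + semi_sq)     # compare with the exact values below
print(0.5, np.pi ** 2 / 2, 0.5 + np.pi ** 2 / 2)
```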

Convention: Usually, in the case $G = \Omega$ the specification of the domain in the above norms and scalar products is omitted.
In the study of partial differential equations, it is often desirable to speak of boundary values of functions defined on the domain $G$. In this respect, the Lebesgue spaces of functions that are square integrable on the boundary of $G$ are important. To introduce these spaces, some preparations are necessary.
In what follows, a point $x \in \mathbb{R}^d$ is written in the form $x = \binom{x'}{x_d}$ with $x' = (x_1, \dots, x_{d-1})^T \in \mathbb{R}^{d-1}$.
A domain $G \subset \mathbb{R}^d$ is said to be located at one side of $\partial G$ if for any $x \in \partial G$ there exist an open neighbourhood $U_x \subset \mathbb{R}^d$ and an orthogonal mapping $Q_x$ in $\mathbb{R}^d$ such that the point $x$ is mapped to a point $\hat{x} = (\hat{x}_1, \dots, \hat{x}_d)^T$, and so $U_x$ is mapped onto a neighbourhood $\hat{U}_x \subset \mathbb{R}^d$ of $\hat{x}$, where in the neighbourhood $\hat{U}_x$ the following properties hold:

(1) The image of $U_x \cap \partial G$ is the graph of some function $\Psi_x : Y_x \subset \mathbb{R}^{d-1} \to \mathbb{R}$; that is, $\hat{x}_d = \Psi_x(\hat{x}_1, \dots, \hat{x}_{d-1}) = \Psi_x(\hat{x}')$ for $\hat{x}' \in Y_x$.

(2) The image of $U_x \cap G$ is "above this graph" (i.e., the points in $U_x \cap G$ correspond to $\hat{x}_d > \Psi_x(\hat{x}')$).

(3) The image of $U_x \cap (\mathbb{R}^d \setminus G)$ is "below this graph" (i.e., the points in $U_x \cap (\mathbb{R}^d \setminus G)$ correspond to $\hat{x}_d < \Psi_x(\hat{x}')$).

A domain $G$ that is located at one side of $\partial G$ is called a $C^l$ domain, $l \in \mathbb{N}$, respectively a Lipschitz(ian) domain, if all $\Psi_x$ are $l$-times continuously differentiable, respectively Lipschitz continuous, in $Y_x$.
Bounded Lipschitz domains are also called strongly Lipschitz.
For bounded domains located at one side of $\partial G$, it is well known (cf., e.g., [37]) that from the whole set of neighbourhoods $\{U_x\}_{x \in \partial G}$ there can be selected a family $\{U_i\}_{i=1}^{n}$ of finitely many neighbourhoods covering $\partial G$, i.e., $n \in \mathbb{N}$ and $\partial G \subset \bigcup_{i=1}^{n} U_i$. Furthermore, for any such family there exists a system of functions $\{\varphi_i\}_{i=1}^{n}$ with the properties $\varphi_i \in C_0^\infty(U_i)$, $\varphi_i(x) \in [0, 1]$ for all $x \in U_i$, and $\sum_{i=1}^{n} \varphi_i(x) = 1$ for all $x \in \partial G$. Such a system is called a partition of unity.
If the domain $G$ is at least Lipschitzian, then Lebesgue's integral over the boundary of $G$ is defined by means of those partitions of unity. In correspondence to the definition of a Lipschitz domain, $Q_i$, $\Psi_i$, and $Y_i$ denote the orthogonal mapping on $U_i$, the function describing the corresponding local boundary, and the preimage of $Q_i(U_i \cap \partial G)$ with respect to $\Psi_i$.
A function $v : \partial G \to \mathbb{R}$ is called Lebesgue integrable over $\partial G$ if the composite functions $\hat{x}' \mapsto v\!\left( Q_i^T \binom{\hat{x}'}{\Psi_i(\hat{x}')} \right)$ belong to $L^1(Y_i)$. The integral is defined as follows:
$$\int_{\partial G} v(s)\, ds := \sum_{i=1}^{n} \int_{\partial G} v(s)\, \varphi_i(s)\, ds
:= \sum_{i=1}^{n} \int_{Y_i} v\!\left( Q_i^T \binom{\hat{x}'}{\Psi_i(\hat{x}')} \right) \varphi_i\!\left( Q_i^T \binom{\hat{x}'}{\Psi_i(\hat{x}')} \right) \sqrt{\left| \det\!\left( \left( \delta_{jk} + \partial_j \Psi_i(\hat{x}')\, \partial_k \Psi_i(\hat{x}') \right)_{j,k=1}^{d-1} \right) \right|} \; d\hat{x}' \,.$$
A function $v : \partial G \to \mathbb{R}$ belongs to $L^2(\partial G)$ iff both $v$ and $v^2$ are Lebesgue integrable over $\partial G$.
In the investigation of time-dependent partial differential equations, linear spaces whose elements are functions of the time variable $t \in [0, T]$, $T > 0$, with values in a normed space $X$ are of interest.
A function $v : [0, T] \to X$ is called continuous on $[0, T]$ if for all $t \in [0, T]$ the convergence $\|v(t + k) - v(t)\|_X \to 0$ as $k \to 0$ holds.
The space $C([0, T], X) = C^0([0, T], X)$ consists of all continuous functions $v : [0, T] \to X$ such that
$$\sup_{t \in (0,T)} \|v(t)\|_X < \infty \,.$$
The space $C^l([0, T], X)$, $l \in \mathbb{N}$, consists of all continuous functions $v : [0, T] \to X$ that have continuous derivatives up to order $l$ on $[0, T]$, with the norm
$$\sum_{i=0}^{l} \sup_{t \in (0,T)} \|v^{(i)}(t)\|_X \,.$$
The space $L^p((0, T), X)$ with $1 \le p \le \infty$ consists of all functions on $(0, T) \times \Omega$ for which
$$v(t, \cdot) \in X \ \text{ for any } t \in (0, T) \,, \qquad F \in L^p(0, T) \ \text{ with } F(t) := \|v(t, \cdot)\|_X \,.$$
Furthermore,
$$\|v\|_{L^p((0,T),X)} := \|F\|_{L^p(0,T)} \,.$$
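As a small illustration of this norm (a sketch with $p = 2$, $X = L^2(0,1)$, $T = 1$ and $v(t,x) = e^{-t} \sin(\pi x)$, all of which are assumptions made only for this example), the computation below approximates $\|v\|_{L^2((0,T),X)}$ by midpoint rules in $t$ and $x$ and compares it with the closed-form value $\big(\tfrac14 (1 - e^{-2T})\big)^{1/2}$.

```python
import numpy as np

T, Nt, Nx = 1.0, 1000, 1000
t = (np.arange(Nt) + 0.5) * T / Nt            # midpoint rule in time on (0, T)
x = (np.arange(Nx) + 0.5) / Nx                # midpoint rule in space on (0, 1)

v = np.exp(-t)[:, None] * np.sin(np.pi * x)[None, :]

F = np.sqrt(np.sum(v ** 2, axis=1) / Nx)      # F(t) = ||v(t, .)||_{L^2(0,1)}
norm = np.sqrt(np.sum(F ** 2) * T / Nt)       # ||v||_{L^2((0,T), L^2(0,1))}
exact = np.sqrt(0.25 * (1.0 - np.exp(-2.0 * T)))
print(norm, exact)
```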
References: Textbooks and Monographs

[1] R.A. Adams. Sobolev Spaces. Academic Press, New York, 1975.
[2] M. Ainsworth and J.T. Oden. A Posteriori Error Estimation in Finite Element Analysis. Wiley, New York, 2000.
[3] O. Axelsson and V.A. Barker. Finite Element Solution of Boundary Value Problems. Theory and Computation. Academic Press, Orlando, 1984.
[4] R.E. Bank. PLTMG, a Software Package for Solving Elliptic Partial Differential Equations: Users Guide 7.0. SIAM, Philadelphia, 1994. Frontiers in Applied Mathematics, Vol. 15.
[5] A. Berman and R.J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979.
[6] D. Braess. Finite Elements. Theory, Fast Solvers, and Applications in Solid Mechanics. Cambridge University Press, Cambridge, 2001 (2nd ed.).
[7] S.C. Brenner and L.R. Scott. The Mathematical Theory of Finite Element Methods. Springer, New York-Berlin-Heidelberg, 2002 (2nd ed.). Texts in Applied Mathematics, Vol. 15.
[8] V.I. Burenkov. Sobolev Spaces on Domains. Teubner, Stuttgart, 1998.
[9] P.G. Ciarlet. Basic Error Estimates for Elliptic Problems. In: P.G. Ciarlet and J.L. Lions, editors, Handbook of Numerical Analysis, Volume II: Finite Element Methods (Part 1). North-Holland, Amsterdam, 1991.
[10] A.J. Chorin and J.E. Marsden. A Mathematical Introduction to Fluid Mechanics. Springer, Berlin-Heidelberg-New York, 1993.
[11] R. Dautray and J.-L. Lions. Mathematical Analysis and Numerical Methods for Science and Technology. Volume 4: Integral Equations and Numerical Methods. Springer, Berlin-Heidelberg-New York, 1990.
[12] L.C. Evans. Partial Differential Equations. American Mathematical Society, Providence, 1998.
[13] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Springer, Berlin-Heidelberg-New York, 1983 (2nd ed.).
[14] V. Girault and P.-A. Raviart. Finite Element Methods for Navier-Stokes Equations. Springer, Berlin-Heidelberg-New York, 1986.
[15] W. Hackbusch. Elliptic Differential Equations. Theory and Numerical Treatment. Springer, Berlin-Heidelberg-New York, 1992.
[16] W. Hackbusch. Iterative Solution of Large Sparse Systems of Equations. Springer, New York, 1994.
[17] W. Hackbusch. Multi-Grid Methods and Applications. Springer, Berlin-Heidelberg-New York, 1985.
[18] L.A. Hageman and D.M. Young. Applied Iterative Methods. Academic Press, New York-London-Toronto-Sydney-San Francisco, 1981.
[19] U. Hornung, editor. Homogenization and Porous Media. Springer, New York, 1997.
[20] T. Ikeda. Maximum Principle in Finite Element Models for Convection-Diffusion Phenomena. North-Holland, Amsterdam-New York-Oxford, 1983.
[21] C. Johnson. Numerical Solution of Partial Differential Equations by the Finite Element Method. Cambridge University Press, Cambridge-New York-New Rochelle-Melbourne-Sydney, 1987.
[22] C.T. Kelley. Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia, 1995.
[23] P. Knupp and S. Steinberg. Fundamentals of Grid Generation. CRC Press, Boca Raton, 1993.
[24] J.D. Logan. Transport Modeling in Hydrogeochemical Systems. Springer, New York-Berlin-Heidelberg, 2001.
[25] J. Nečas. Les Méthodes Directes en Théorie des Équations Elliptiques. Masson/Academia, Paris/Prague, 1967.
[26] M. Renardy and R.C. Rogers. An Introduction to Partial Differential Equations. Springer, New York, 1993.
[27] H.-G. Roos, M. Stynes, and L. Tobiska. Numerical Methods for Singularly Perturbed Differential Equations. Springer, Berlin-Heidelberg-New York, 1996. Springer Series in Computational Mathematics, Vol. 24.
[28] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS Publ. Co., Boston, 1996.
[29] D.H. Sattinger. Topics in Stability and Bifurcation Theory. Springer, Berlin-Heidelberg-New York, 1973.
[30] J. Stoer. Introduction to Numerical Analysis. Springer, Berlin-Heidelberg-New York, 1996 (2nd ed.).
[31] G. Strang and G.J. Fix. An Analysis of the Finite Element Method. Wellesley-Cambridge Press, Wellesley, 1997 (3rd ed.).
[32] J.C. Strikwerda. Finite Difference Schemes and Partial Differential Equations. Wadsworth & Brooks/Cole, Pacific Grove, 1989.
[33] J.F. Thompson, Z.U.A. Warsi, and C.W. Mastin. Numerical Grid Generation: Foundations and Applications. North-Holland, Amsterdam, 1985.
[34] R.S. Varga. Matrix Iterative Analysis. Springer, Berlin-Heidelberg-New York, 2000.
[35] R. Verfürth. A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques. Wiley and Teubner, Chichester-New York-Brisbane-Toronto-Singapore and Stuttgart-Leipzig, 1996.
[36] S. Whitaker. The Method of Volume Averaging. Kluwer Academic Publishers, Dordrecht, 1998.
[37] J. Wloka. Partial Differential Equations. Cambridge University Press, New York, 1987.
[38] D.M. Young. Iterative Solution of Large Linear Systems. Academic Press, New York, 1971.
[39] E. Zeidler. Nonlinear Functional Analysis and Its Applications. II/A: Linear Monotone Operators. Springer, Berlin-Heidelberg-New York, 1990.