
    Σ_{n=0}^N a_n z^n → f(z)


uniformly for |z| ≤ ρ. By Lemma 11.5.8, we know that there exists a function
g : Ω → C such that Σ_{n=1}^N n a_n z^{n−1} → g(z) as N → ∞ for all z ∈ Ω. Using
Lemma 11.5.8 again, we have

    Σ_{n=1}^N n a_n z^{n−1} → g(z)

uniformly for |z| ≤ ρ. Since

    (d/dz) Σ_{n=0}^N a_n z^n = Σ_{n=1}^N n a_n z^{n−1},


Exercise 11.5.6 now tells us that f is differentiable in {z : |z| < ρ} with

    f′(z) = Σ_{n=1}^∞ n a_n z^{n−1}.

Since |w| < ρ, we are done.
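Term-by-term differentiation can be checked numerically on a concrete series. The sketch below (an illustration only, not part of the proof) uses f(z) = Σ z^n = 1/(1 − z), whose derivative inside the radius of convergence should be 1/(1 − z)².

```python
# Numerical sanity check: for f(z) = sum z^n = 1/(1-z), the term-by-term
# derivative sum n*z^(n-1) should converge to 1/(1-z)^2 for |z| < 1.
def derivative_partial_sum(z, N):
    return sum(n * z ** (n - 1) for n in range(1, N + 1))

z = 0.3
exact = 1 / (1 - z) ** 2
approx = derivative_partial_sum(z, 60)
assert abs(approx - exact) < 1e-12
```

The geometric decay of the tail makes 60 terms ample at z = 0.3; nearer the boundary |z| = 1 many more terms would be needed.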

Remark: The proof above is more cunning than is at first apparent. Roughly
speaking, it is often hard to prove directly that Σ_{n=0}^∞ a_n z^n has a certain
property for all |z| < R, the radius of convergence, but relatively easy to show
that Σ_{n=0}^N a_n z^n has a certain property for all |z| < R′, whenever R′ < R.
However, if we choose R_1 < R_2 < ... with R_N → R, we then know that
Σ_{n=0}^∞ a_n z^n will have the property for all

    z ∈ ∪_{N=1}^∞ {z : |z| < R_N} = {z : |z| < R},


and we are done. (We give two alternative proofs of Theorem 11.5.11 in
Exercise K.230 and Exercise K.231.)
Here are two useful corollaries.
293
Please send corrections however trivial to twk@dpmms.cam.ac.uk

Exercise 11.5.12. Suppose that a_n ∈ C and Σ_{n=0}^∞ a_n z^n has radius of
convergence R > 0. Set Ω = {z : |z| < R} and define f : Ω → C by

    f(z) = Σ_{n=0}^∞ a_n z^n.

Show that f is infinitely differentiable on Ω and a_n = f^(n)(0)/n!.

In other words, if f can be expanded in a power series about 0, then that
power series must be the Taylor series.

Exercise 11.5.13. (Uniqueness of power series.) Suppose that a_n ∈ C
and Σ_{n=0}^∞ a_n z^n has radius of convergence R > 0. Set Ω = {z : |z| < R} and
define f : Ω → C by

    f(z) = Σ_{n=0}^∞ a_n z^n.

If there exists a δ with 0 < δ ≤ R such that f(z) = 0 for all |z| < δ, show,
by using the preceding exercise, or otherwise, that a_n = 0 for all n ≥ 0. [In
Exercise K.239 we give a stronger result with a more direct proof.]

By restricting our attention to the real axis, we can obtain versions of all
these results for real power series.

Lemma 11.5.14. Suppose that a_n ∈ R.
(i) Either Σ_{n=0}^∞ a_n x^n converges for all x ∈ R (in which case we say the
series has infinite radius of convergence) or there exists an R ≥ 0 such that
Σ_{n=0}^∞ a_n x^n converges for |x| < R and diverges for |x| > R (in which case we
say the series has radius of convergence R).
(ii) If 0 ≤ ρ < R then Σ_{n=0}^∞ a_n x^n converges uniformly on [−ρ, ρ].
(iii) The sum f(x) = Σ_{n=0}^∞ a_n x^n is differentiable, term by term, on
(−R, R).
(iv) If R > 0, f is infinitely differentiable and a_n = f^(n)(0)/n!.
(v) If f vanishes on (−δ, δ), where 0 < δ ≤ R, then a_n = 0 for all n.

Part (iv) should be read in conjunction with Cauchy's example of a well
behaved function with no power series expansion round 0 (Example 7.1.5).
The fact that we can differentiate a power series term by term is important
for two reasons. The first is that there is a very beautiful and useful theory
of differentiable functions from C to C (called 'Complex Variable Theory'
or 'The Theory of Analytic Functions'). In the initial development of the
theory it is not entirely clear that there are any interesting functions for the
theory to talk about. Power series provide such interesting functions.
The second reason is that it provides a rigorous justification for the use of
power series in the solution of differential equations by methods of the type
employed on page 92.
Exercise 11.5.15. (i) Show that the sum Σ_{n=0}^∞ z^n/n! has infinite radius of
convergence.
(ii) Let us set

    e(z) = Σ_{n=0}^∞ z^n/n!

for all z ∈ C. Show that e is everywhere differentiable and e′(z) = e(z).
(iii) Use the mean value theorem of Exercise 11.5.5 to show that the
function f defined by f(z) = e(a − z)e(z) is constant. Deduce that
e(a − z)e(z) = e(a) for all z ∈ C and a ∈ C and conclude that

    e(z)e(w) = e(z + w)

for all z, w ∈ C.
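The functional equation in part (iii) can be illustrated numerically with partial sums of the series (a sanity check, not a proof; the truncation point 40 is an arbitrary choice of this sketch):

```python
# Partial sums of e(z) = sum z^n/n!, accumulating z^n/n! iteratively.
# With enough terms, e(z)e(w) and e(z+w) agree to within rounding error.
def e_partial(z, N=40):
    term, total = 1.0 + 0j, 0j
    for n in range(N):
        total += term            # add z^n/n!
        term *= z / (n + 1)      # next term: z^(n+1)/(n+1)!
    return total

z, w = 0.5 + 1.0j, -0.3 + 0.2j
assert abs(e_partial(z) * e_partial(w) - e_partial(z + w)) < 1e-10
```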
Here is another example.
Example 11.5.16. Let α ∈ C. Solve the differential equation

    (1 + z)f′(z) = αf(z)

subject to f(0) = 1.
Solution. We look for a solution of the form

    f(z) = Σ_{n=0}^∞ a_n z^n

with radius of convergence R > 0. We differentiate term by term within the
radius of convergence to get

    (1 + z) Σ_{n=1}^∞ n a_n z^{n−1} = α Σ_{n=0}^∞ a_n z^n,

whence

    Σ_{n=0}^∞ ((α − n)a_n − (n + 1)a_{n+1}) z^n = 0

for all |z| < R. By the uniqueness result of Exercise 11.5.13, this gives

    (α − n)a_n − (n + 1)a_{n+1} = 0,

so

    a_{n+1} = ((α − n)/(n + 1)) a_n,

and, by induction,

    a_n = (A/n!) Π_{j=0}^{n−1} (α − j)

for some constant A. Since f(0) = 1, we have A = 1 and

    f(z) = Σ_{n=0}^∞ (1/n!) Π_{j=0}^{n−1} (α − j) z^n.

If α is a positive integer N, say, then a_j = 0 for j ≥ N + 1 and we get
the unsurprising result

    f(z) = Σ_{n=0}^N (N choose n) z^n = (1 + z)^N.

From now on we assume that α is not a positive integer. If z ≠ 0,

    |a_{n+1} z^{n+1}|/|a_n z^n| = (|α − n|/(n + 1))|z| = (|1 − αn^{−1}|/(1 + n^{−1}))|z| → |z|

as n → ∞, so, by using the ratio test, Σ_{n=0}^∞ a_n z^n has radius of convergence
1.
We have shown that, if there is a power series solution, it must be

    f(z) = Σ_{n=0}^∞ (1/n!) Π_{j=0}^{n−1} (α − j) z^n.

Differentiating term by term, we see that, indeed, the f given is a solution
valid for |z| < 1.
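The recurrence a_{n+1} = (α − n)a_n/(n + 1) is easy to run on a machine. The sketch below (a numerical illustration; the values of α and x are arbitrary choices) compares the partial sums with (1 + x)^α for a real x inside the radius of convergence:

```python
# Generate the coefficients from the recurrence a_{n+1} = (alpha - n) a_n/(n+1)
# of Example 11.5.16 and sum the series at a point with |x| < 1.
def binomial_series(alpha, x, N=200):
    a, total = 1.0, 0.0
    for n in range(N):
        total += a * x ** n
        a *= (alpha - n) / (n + 1)
    return total

assert abs(binomial_series(0.5, 0.4) - 1.4 ** 0.5) < 1e-12
```

When α is a positive integer the recurrence terminates, reproducing the polynomial (1 + x)^N noted above.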
We have left open the possibility that the differential equation of
Example 11.5.16 might have other solutions (such solutions would not have
Taylor expansions). The uniqueness of the solution follows from general
results developed later in this book (see Section 12.2). However there is a
simple proof of uniqueness in this case.

Example 11.5.17. (i) Write D = {z : |z| < 1}. Let α ∈ C. Suppose that
f_α : D → C satisfies

    (1 + z)f_α′(z) = αf_α(z)

and f_α(0) = 1, whilst g_{−α} : D → C satisfies

    (1 + z)g_{−α}′(z) = −αg_{−α}(z)

and g_{−α}(0) = 1. Use the mean value theorem of Exercise 11.5.5 to show that
f_α(z)g_{−α}(z) = 1 for all z ∈ D and deduce that the differential equation

    (1 + z)f′(z) = αf(z),

subject to f(0) = 1, has exactly one solution on D.
(ii) If α, β ∈ C show, using the notation of part (i), that

    f_{α+β}(z) = f_α(z)f_β(z)

for all z ∈ D. State and prove a similar result for f_α(f_β(z)).

Restricting to the real axis we obtain the following version of our results.

Lemma 11.5.18. Let α be a real number. Then the differential equation

    (1 + x)f′(x) = αf(x),

subject to f(0) = 1, has exactly one solution f : (−1, 1) → R which is given
by

    f(x) = Σ_{n=0}^∞ (1/n!) Π_{j=0}^{n−1} (α − j) x^n.


In Section 5.7 we developed the theory of the function r_α(x) = x^α for
x > 0 and α real. One of the properties established there is that

    x r_α′(x) = α r_α(x)

for all x > 0. We also have r_α(1) = 1. Thus, if g_α(x) = r_α(1 + x), we have

    (1 + x)g_α′(x) = α g_α(x)

for all x ∈ (−1, 1) and g_α(0) = 1. Lemma 11.5.18 thus gives the following
well known binomial expansion.

Lemma 11.5.19. If x ∈ (−1, 1), then

    (1 + x)^α = Σ_{n=0}^∞ (1/n!) Π_{j=0}^{n−1} (α − j) x^n.

Exercise 11.5.20. Use the same ideas to show that

    log(1 − x) = − Σ_{n=1}^∞ x^n/n

for x ∈ (−1, 1).
Exercise 11.5.21. (i) If you are unfamiliar with the general binomial
expansion described in Lemma 11.5.19, write out the first few terms explicitly
in the cases α = −1, α = −2, α = −3, α = 1/2 and α = −1/2. Otherwise,
go directly to part (ii).
(ii) Show that

    1 + (1/2)(2x/(1 + x²))² + ((1·3)/(2·4))(2x/(1 + x²))⁴ + ((1·3·5)/(2·4·6))(2x/(1 + x²))⁶ + ...

converges to (1 + x²)/(1 − x²) if |x| < 1 but converges to (1 + x²)/(x² − 1) if
|x| > 1. In [24], Hardy quotes this example to show the difficulties that arise
if we believe that equalities which are true in one domain must be true in all
domains.
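The two limits can be confirmed numerically. The sketch below reads the series as the binomial expansion of (1 − u)^{−1/2} with u = (2x/(1 + x²))² (an interpretive assumption of this sketch); note that x and 1/x give the same u, which is exactly why the sum cannot equal (1 + x²)/(1 − x²) in both domains:

```python
# Sum the series 1 + (1/2)u + (1*3)/(2*4)u^2 + ... with u = (2x/(1+x^2))^2.
def hardy_sum(x, M=400):
    u = (2 * x / (1 + x * x)) ** 2
    c, total = 1.0, 0.0
    for m in range(M):
        total += c * u ** m
        c *= (2 * m + 1) / (2 * m + 2)   # coefficients 1/2, (1*3)/(2*4), ...
    return total

assert abs(hardy_sum(0.5) - (1 + 0.25) / (1 - 0.25)) < 1e-10   # |x| < 1
assert abs(hardy_sum(2.0) - (1 + 4.0) / (4.0 - 1.0)) < 1e-10   # |x| > 1
```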
Exercise 11.5.22. Use Taylor's theorem with remainder to obtain the
expansions for (1 + x)^α and log(1 − x).
[This is a slightly unfair question since the forms of the Taylor remainder
given in this book are not particularly well suited to the problem. If the reader
consults other texts she will find forms of the remainder which will work more
easily. She should then ask herself what the point of these forms of remainder
is, apart from obtaining Taylor series which are much more easily obtained
by finding the power series solution of an appropriate differential equation.]
Many textbooks on mathematical methods devote some time to the
process of solving differential equations by power series. The results of this
section justify the process.
Slogan: The formal process of solving a differential equation by power
series yields a correct result within the radius of convergence of the power
series produced.
The slogan becomes a theorem once we specify the type of differential
equation to be solved.

Note, however, that, contrary to the implied promise of some textbooks
on mathematical methods, power series solutions are not always as useful as
they look.

Exercise 11.5.23. We know that Σ_{n=0}^∞ (−1)^n x^{2n}/(2n)! converges everywhere
to cos x. Try and use this formula, together with a hand calculator, to
compute cos 100. Good behaviour in the sense of the pure mathematician
merely means 'good behaviour in the long run' and the 'long run' may be too
long for any practical use.
Can you suggest and implement a sensible method[4] to compute cos 100?
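One sensible approach, sketched below (an illustration of the numerical point, not the book's own prescribed answer), is to reduce the argument modulo 2π before summing: the raw series for cos 100 involves terms of size around 10⁴³, and floating-point cancellation then destroys every significant digit.

```python
import math

# Sum the cosine series by accumulating terms iteratively.
def cos_series(x, N=200):
    term, total = 1.0, 0.0
    for n in range(N):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return total

naive = cos_series(100.0)                            # ruined by cancellation
reduced = cos_series(math.fmod(100.0, 2 * math.pi))  # argument reduction first
assert abs(reduced - math.cos(100.0)) < 1e-9
assert abs(naive - math.cos(100.0)) > 1e-6           # naive sum is far off
```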

Exercise 11.5.24. By considering the relations that the coefficients must
satisfy, show that there is no power series solution, other than the zero
solution, for the equation

    x³y′(x) = 2y(x)

with y(0) = 0 valid in some neighbourhood of 0.
Show, however, that the system does have a well behaved non-zero solution.
[Hint: Example 7.1.5.]
If the reader is prepared to work quite hard, Exercise K.243 gives a good
condition for the existence of a power series solution for certain typical
differential equations.

We end this section with a look in another direction.

Exercise 11.5.25. If z ∈ C and n is a positive integer we define n^{−z} =
e^{−z log n}. By using the Weierstrass M-test, or otherwise, show that, if ε > 0,

    Σ_{n=1}^∞ n^{−z} converges uniformly for Re z > 1 + ε.

We call the limit ζ(z). Show further that ζ is differentiable on the range
considered. Deduce that ζ is well defined and differentiable on the set
{z ∈ C : Re z > 1}. (ζ is the famous Riemann zeta function.)


11.6 Fourier series ♥

In this section we shall integrate complex-valued functions. The definition
used is essentially that of Definition 8.5.1.
[4] Pressing the cos button is sensible, but not very instructive.

Definition 11.6.1. If f : [a, b] → C is such that Re f : [a, b] → R and
Im f : [a, b] → R are Riemann integrable, then we say that f is Riemann
integrable and

    ∫_a^b f(x) dx = ∫_a^b Re f(x) dx + i ∫_a^b Im f(x) dx.

We leave it to the conscientious reader to check that the integral behaves
as it ought to behave.
If the reader has attended a course on mathematical methods she will
probably be familiar with the notion of the Fourier series of a periodic
function.

Definition 11.6.2. If f : R → C is continuous and periodic with period 2π
(that is, f(t + 2π) = f(t) for all t) and m is an integer, we set

    f̂(m) = (1/2π) ∫_{−π}^{π} f(t) exp(−imt) dt.

Fourier claimed[5], in effect, that

    f(t) = Σ_{n=−∞}^∞ f̂(n) exp(int).

We now know that the statement is false in the sense that there exist
continuous functions such that

    Σ_{n=−N}^N f̂(n) exp(int_0) ↛ f(t_0)

as N → ∞ for some t_0, but true in many other and deeper senses.
The unraveling of the various ways in which Fourier's theorem holds took
a century and a half[6] and was one of the major influences on the rigorisation
of analysis. In this section we shall merely provide a simple condition on f̂
which ensures that Fourier's statement holds in its original form for a given
function f.
Our discussion hinges on the following theorem, which is very important
in its own right.
Theorem 11.6.3. (Uniqueness of the Fourier series.) If f : R → C is
continuous and periodic with period 2π and f̂(n) = 0 for all n, then f = 0.
[5] Others had had the idea before but Fourier 'bet the farm on it'.
[6] Supposing the process to have terminated.

To prove this result it turns out to be sufficient to prove an apparently
weaker result. (See Exercises 11.6.6 and 11.6.7.)

Lemma 11.6.4. If f : R → R is continuous and periodic with period 2π and
f̂(n) = 0 for all n, then f(0) = 0.

Proof. Suppose f(0) ≠ 0. Without loss of generality we may suppose that
f(0) > 0 (otherwise, we can consider −f). By continuity, we can find an ε
with 1 > ε > 0 such that |f(t) − f(0)| < f(0)/2, and so f(t) > f(0)/2, for all
|t| ≤ ε. Now choose η > 0 such that 2η + cos ε < 1 and set P(t) = η + cos t.
Since P(t) = η + (1/2)e^{it} + (1/2)e^{−it}, we have

    P(t)^N = Σ_{k=−N}^N b_{Nk} e^{ikt}

for some b_{Nk}, and so

    ∫_{−π}^{π} f(t)P(t)^N dt = Σ_{k=−N}^N b_{Nk} ∫_{−π}^{π} f(t)e^{ikt} dt = 2π Σ_{k=−N}^N b_{Nk} f̂(−k) = 0

for all N.
Since f is continuous on [−π, π], it is bounded, so there exists a K such
that |f(t)| ≤ K for all t ∈ [−π, π]. Since P(0) = η + 1, we can find an ε′ > 0
with ε > ε′ such that P(t) ≥ 1 + η/2 for all |t| ≤ ε′. Finally we observe
that |P(t)| ≤ 1 − η for ε ≤ |t| ≤ π. Putting all our information together, we
obtain

    f(t)P(t)^N ≥ f(0)(1 + η/2)^N/2   for all |t| ≤ ε′,
    f(t)P(t)^N ≥ 0                   for all ε′ ≤ |t| ≤ ε,
    |f(t)P(t)^N| ≤ K(1 − η)^N        for all ε ≤ |t| ≤ π.

Thus

    0 = ∫_{−π}^{π} f(t)P(t)^N dt
      = ∫_{|t|≤ε′} f(t)P(t)^N dt + ∫_{ε′≤|t|≤ε} f(t)P(t)^N dt + ∫_{ε≤|t|≤π} f(t)P(t)^N dt
      ≥ ε′f(0)(1 + η/2)^N + 0 − 2πK(1 − η)^N → ∞

as N → ∞. The assumption that f(0) ≠ 0 has led to a contradiction and the
required result follows by reductio ad absurdum.

Exercise 11.6.5. Draw sketches illustrating the proof just given.

Exercise 11.6.6. (i) If g : R → C is continuous and periodic with period
2π and a ∈ R, we write g_a(t) = g(t − a). Show that ĝ_a(n) = exp(−ina)ĝ(n).
(ii) By translation, or otherwise, prove the following result. If f : R → R
is continuous and periodic with period 2π and f̂(n) = 0 for all n, then f = 0.

Exercise 11.6.7. (i) If g : R → C is continuous and periodic with period
2π, show that ĝ*(n) = (ĝ(−n))*.
(ii) By considering f + f* and f − f*, or otherwise, prove Theorem 11.6.3.

We can now state and prove our promised result on Fourier sums.

Theorem 11.6.8. If f : R → C is continuous and periodic with period 2π
and Σ_{n=−∞}^∞ |f̂(n)| converges, then

    Σ_{n=−N}^N f̂(n) exp(int) → f(t)

uniformly as N → ∞.
Proof. Since |f̂(n) exp(int) + f̂(−n) exp(−int)| ≤ |f̂(n)| + |f̂(−n)|, the
Weierstrass M-test tells us that Σ_{n=−N}^N f̂(n) exp(int) converges uniformly to
g(t), say. Since the uniform limit of continuous functions is continuous, g is
continuous. We wish to show that g = f.
Observe that, since |exp(−imt)| = 1, we have

    Σ_{n=−N}^N f̂(n) exp(i(n − m)t) = exp(−imt) Σ_{n=−N}^N f̂(n) exp(int) → exp(−imt)g(t)

uniformly as N → ∞, so, by Theorem 11.4.10, we have

    Σ_{n=−N}^N f̂(n) (1/2π) ∫_{−π}^{π} exp(i(n − m)t) dt = (1/2π) ∫_{−π}^{π} Σ_{n=−N}^N f̂(n) exp(i(n − m)t) dt
        → (1/2π) ∫_{−π}^{π} exp(−imt)g(t) dt = ĝ(m).

Now (1/2π) ∫_{−π}^{π} exp(irt) dt takes the value 1 if r = 0 and the value 0 otherwise,
so we have shown that f̂(m) → ĝ(m) as N → ∞. Thus f̂(m) = ĝ(m) for all
m and, by the uniqueness of Fourier series (Theorem 11.6.3), we have f = g
as required.

Exercise 11.6.9. If f : R → C is periodic with period 2π and has continuous
second derivative, show, by integrating by parts twice, that

    f̂(n) = −(1/n²) f̂″(n)

for all n ≠ 0, where f̂″(n) denotes the nth Fourier coefficient of f″. Deduce
that

    |f̂(n)| ≤ (1/n²) sup_{t∈[−π,π]} |f″(t)|

for all n ≠ 0, and that

    Σ_{n=−N}^N f̂(n) exp(int) → f(t)

uniformly as N → ∞.

Exercise 11.6.10. Suppose f : R → R is a 2π periodic function with f(x) =
−f(−x) for all x and f(x) = x(π − x) for 0 ≤ x ≤ π. Show that

    f(x) = (8/π) Σ_{m=0}^∞ sin((2m + 1)x)/(2m + 1)³

for all x and, by choosing a particular x, show that

    Σ_{m=0}^∞ (−1)^m/(2m + 1)³ = π³/32.

Exercise 11.6.11. Suppose f : R → R is a 2π periodic continuous function
with Σ_{n=−∞}^∞ |n f̂(n)| convergent. Show that f is differentiable and

    f′(t) = Σ_{n=−∞}^∞ in f̂(n) exp(int).
Chapter 12

Contraction mappings and differential equations

12.1 Banach's contraction mapping theorem

This chapter and the next depend on the famous contraction mapping
theorem by which Banach transformed a 'folk-technique' into a theorem.

Definition 12.1.1. Let (X, d) be a metric space and T : X → X a mapping.
We say that w ∈ X is a fixed point of T if Tw = w. We say that T is a
contraction mapping if there exists a positive number K < 1 with d(Tx, Ty) ≤
Kd(x, y) for all x, y ∈ X.

The next exercise is easy but helps suggest the proof of the theorem that
follows.

Exercise 12.1.2. Let (X, d) be a metric space and T : X → X a contraction
mapping with a fixed point w. Suppose that x_0 ∈ X and we define x_n
inductively by x_{n+1} = Tx_n. Show that d(x_n, w) → 0 as n → ∞.

Theorem 12.1.3. (The contraction mapping theorem.) A contraction
mapping on a non-empty complete metric space has a unique fixed point.

Proof. Suppose 1 > K > 0, (X, d) is a non-empty complete metric space and
T : X → X has the property d(Tx, Ty) ≤ Kd(x, y) for all x, y ∈ X.
We show first that, if T has a fixed point, it is unique. For suppose
Tw = w and Tz = z. We have

    d(z, w) = d(Tz, Tw) ≤ Kd(z, w)

so, since K < 1, d(z, w) = 0 and z = w.


To prove that a fixed point exists, choose any x_0 ∈ X and define x_n
inductively by x_{n+1} = Tx_n. (The preceding exercise shows this is a good
idea.) By induction,

    d(x_{n+1}, x_n) = d(Tx_n, Tx_{n−1}) ≤ Kd(x_n, x_{n−1}) ≤ ··· ≤ K^n d(x_1, x_0)

and so, by the triangle inequality, we have, whenever m > n,

    d(x_m, x_n) ≤ Σ_{j=n}^{m−1} d(x_{j+1}, x_j) ≤ Σ_{j=n}^{m−1} K^j d(x_1, x_0) ≤ (K^n/(1 − K)) d(x_1, x_0) → 0

as n → ∞. Thus the sequence x_n is Cauchy. Since (X, d) is complete, we
can find a w such that d(x_n, w) → 0 as n → ∞.
We now show that w is indeed a fixed point. To do this, we observe that

    d(Tw, w) ≤ d(Tw, x_{n+1}) + d(x_{n+1}, w) = d(Tw, Tx_n) + d(x_{n+1}, w)
             ≤ Kd(w, x_n) + d(x_{n+1}, w) → 0 + 0 = 0

as n → ∞. Thus d(Tw, w) = 0 and Tw = w.
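The proof is constructive: iterate T from any starting point. A minimal Python sketch (the map cos on [0, 1] is an example chosen here, a contraction there since |sin t| ≤ sin 1 < 1):

```python
import math

# Iterate x_{n+1} = T(x_n) until successive iterates are within tol.
def fixed_point(T, x0, tol=1e-12, max_iter=10000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

w = fixed_point(math.cos, 0.5)
assert abs(math.cos(w) - w) < 1e-10   # w solves cos w = w
```

The geometric convergence rate K^n from the proof is visible in practice: with K ≈ 0.84 here, roughly 170 iterations reach 10⁻¹² accuracy.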
Wide though the conditions are, the reader should exercise caution before
attempting to widen them further.

Example 12.1.4. (i) If X = {−1, 1}, d is ordinary distance and the map
T : X → X is given by Tx = −x, then (X, d) is a complete metric space and
d(Tx, Ty) = d(x, y) for all x, y ∈ X, but T has no fixed point.
(ii) If X = [1, ∞), d is Euclidean distance and

    Tx = 1 + x + exp(−x),

then (X, d) is a complete metric space and d(Tx, Ty) < d(x, y) for all x, y ∈ X
with x ≠ y, but T has no fixed point.
(iii) If X = (0, ∞), d is ordinary distance and the map T : X → X is
given by Tx = x/2, then (X, d) is a metric space and T is a contraction
mapping, but T has no fixed point.
Exercise 12.1.5. Verify the statements made in Example 12.1.4. In each
case, state the hypothesis in Theorem 12.1.3 which is not satisfied. In each
case, identify the point at which the proof of Theorem 12.1.3 fails.

The contraction mapping theorem is not the only important fixed point
theorem in mathematics. Exercise 1.6.5 gives another fixed point result which
can be generalised substantially. (For example, if

    B = {x : ‖x‖ ≤ 1}

is the unit ball in R^n, then any continuous map of B into itself has a fixed
point.) However, the standard proofs involve algebraic topology and are
beyond the scope of this book.

12.2 Existence of solutions of differential equations

We use the contraction mapping theorem to show that a wide class of
differential equations actually have a solution.
We shall be looking at equations of the form

    y′ = f(t, y).

Our first, simple but important, result is that this problem on differential
equations can be turned into a problem on integral equations. (We shall
discuss why this may be expected to be useful after Exercise 12.2.2.)

Lemma 12.2.1. If f : R² → R is continuous, t_0, y_0 ∈ R and δ > 0, then
the following two statements are equivalent.
(A) The function y : (t_0 − δ, t_0 + δ) → R is differentiable and satisfies the
equation y′(t) = f(t, y(t)) for all t ∈ (t_0 − δ, t_0 + δ) together with the boundary
condition y(t_0) = y_0.
(B) The function y : (t_0 − δ, t_0 + δ) → R is continuous and satisfies the
condition

    y(t) = y_0 + ∫_{t_0}^t f(u, y(u)) du

for all t ∈ (t_0 − δ, t_0 + δ).

Proof. We show that (A) implies (B). Suppose that y satisfies condition (A).
Since y is differentiable, it is continuous. Thus, since f is continuous, y′ is
continuous and one of the standard forms of the fundamental theorem of the
calculus (Theorem 8.3.11) gives

    y(t) − y(t_0) = ∫_{t_0}^t f(u, y(u)) du

so, since y(t_0) = y_0,

    y(t) = y_0 + ∫_{t_0}^t f(u, y(u)) du

for all t ∈ (t_0 − δ, t_0 + δ) as required.
The fact that (B) implies (A) is an immediate consequence of the
fundamental theorem of the calculus in the form Theorem 8.3.6.

Exercise 12.2.2. If f : R² → R is n times differentiable then any solution
of y′(t) = f(t, y(t)) is n + 1 times differentiable.

Remark: Most mathematicians carry in their minds a list of operations which
are or are not likely to be troublesome. Such a list will probably contain
the following entries.

    less troublesome       more troublesome
    multiplication         division
    interpolation          extrapolation
    averaging              differencing
    integration            differentiation
    direct calculation     finding inverses

Integration produces a better behaved function, differentiation may well
produce a worse behaved function. The integral of an integrable function is an
integrable function, the derivative of a differentiable function need not be
differentiable. The contraction mapping theorem concerns a map T : X → X,
so to apply it we must be sure that our operation T does not take us out
of our initial space. This is much easier to ensure if T involves integration
rather than differentiation.
Theorem 12.2.3. Suppose f : R² → R is continuous, t_0, y_0 ∈ R and δ > 0.
Suppose further that there exists a K > 0 such that Kδ < 1 and

    |f(t, u) − f(t, v)| ≤ K|u − v|                                    (⋆)

for all t ∈ [t_0 − δ, t_0 + δ] and all u and v. Then there exists a unique
y : [t_0 − δ, t_0 + δ] → R which is continuous and satisfies the condition

    y(t) = y_0 + ∫_{t_0}^t f(u, y(u)) du

for all t ∈ [t_0 − δ, t_0 + δ].

Proof. We know that C([t_0 − δ, t_0 + δ]), the space of continuous functions on
[t_0 − δ, t_0 + δ] with the uniform norm ‖ ‖_∞, is complete. Now consider the
map T : C([t_0 − δ, t_0 + δ]) → C([t_0 − δ, t_0 + δ]) given by

    (Tg)(t) = y_0 + ∫_{t_0}^t f(u, g(u)) du.

If t_0 + δ ≥ t ≥ t_0, we have

    |(Tg)(t) − (Th)(t)| = |∫_{t_0}^t (f(u, g(u)) − f(u, h(u))) du|
                        ≤ ∫_{t_0}^t |f(u, g(u)) − f(u, h(u))| du
                        ≤ ∫_{t_0}^t K|g(u) − h(u)| du
                        ≤ (t − t_0)K‖g − h‖_∞ ≤ Kδ‖g − h‖_∞,

and a similar argument gives

    |(Tg)(t) − (Th)(t)| ≤ Kδ‖g − h‖_∞

for t_0 ≥ t ≥ t_0 − δ. Thus

    ‖Tg − Th‖_∞ ≤ Kδ‖g − h‖_∞

and T is a contraction mapping.
The contraction mapping theorem tells us that T has a unique fixed point,
that is, there exists a unique y ∈ C([t_0 − δ, t_0 + δ]) such that

    y(t) = y_0 + ∫_{t_0}^t f(u, y(u)) du

for all t ∈ [t_0 − δ, t_0 + δ] and this is the required result.
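The map T can be discretised and iterated directly. The sketch below (a numerical illustration using the example y′ = y, y(0) = 1; the grid and trapezium rule are choices of this sketch, not part of the proof) shows the Picard iterates approaching exp(t) on [0, 0.5]:

```python
import math

# One application of the discretised map (T g)(t) = y0 + integral of f(u, g(u)),
# with the integral computed cumulatively by the trapezium rule on the grid ts.
def picard_step(f, y0, ts, g):
    out, acc = [y0], 0.0
    for i in range(1, len(ts)):
        h = ts[i] - ts[i - 1]
        acc += 0.5 * h * (f(ts[i - 1], g[i - 1]) + f(ts[i], g[i]))
        out.append(y0 + acc)
    return out

n = 200
ts = [0.5 * i / n for i in range(n + 1)]
g = [1.0] * (n + 1)                     # start from the constant function 1
for _ in range(30):
    g = picard_step(lambda t, y: y, 1.0, ts, g)

assert abs(g[-1] - math.exp(0.5)) < 1e-5
```

Here Kδ = 0.5 < 1, so each iteration roughly halves the distance to the fixed point, exactly as the contraction estimate predicts.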
Exercise 12.2.4. Restate Theorem 12.2.3 in terms of differential equations.

Condition ⋆ is called a Lipschitz condition.
Exercise 12.2.5. (i) Show that, if f : R² → R has continuous partial
derivative f,2, then given any [a, b] and [c, d] we can find a K such that

    |f(t, u) − f(t, v)| ≤ K|u − v|

for all t ∈ [a, b] and u, v ∈ [c, d].
(ii) If f : R² → R is given by f(t, y) = |y|, show that

    |f(t, u) − f(t, v)| ≤ |u − v|

for all t, u and v, but f does not have a partial derivative f,2 everywhere.
In the absence of a condition like ⋆, differential equations can have
unexpected properties.

Exercise 12.2.6. Consider the differential equation

    y′ = 3y^{2/3}

with y(0) = 0. Show that it has the solution

    y(t) = (t − a)³   for t < a,
    y(t) = 0          for a ≤ t ≤ b,
    y(t) = (t − b)³   for b < t,

whenever a ≤ 0 ≤ b.
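The check that each piece satisfies the equation is elementary; the sketch below verifies it pointwise for the particular choice a = −1, b = 1 (so y(0) = 0 yet y is not identically zero, exhibiting the failure of uniqueness):

```python
import math

def cbrt(u):
    # real cube root, valid for negative u as well
    return math.copysign(abs(u) ** (1 / 3), u)

def y(t, a=-1.0, b=1.0):
    if t < a:
        return (t - a) ** 3
    if t <= b:
        return 0.0
    return (t - b) ** 3

def yprime(t, a=-1.0, b=1.0):
    if t < a:
        return 3 * (t - a) ** 2
    if t <= b:
        return 0.0
    return 3 * (t - b) ** 2

# y' = 3 y^(2/3) holds at every sample point, although y(0) = 0 and y is
# not the zero solution.
for t in [-3.0, -1.5, 0.0, 0.5, 2.0, 4.0]:
    assert abs(yprime(t) - 3 * cbrt(y(t)) ** 2) < 1e-9
```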
Exercise 12.2.6 is worth remembering whenever you are tempted to convert
the useful rule of thumb 'first order differential equations involve one choice
of constant' into a theorem.

Remark: It is easy to write down differential equations with no solution. For
example, there is no real-valued solution to

    (y′)² + y² + 1 = 0.

However, it can be shown that the existence part of Theorem 12.2.3 continues
to hold, even if we drop condition ⋆, provided merely that f is continuous.
The reader may wish to ponder on the utility of an existence theorem in the
absence of a guarantee of uniqueness.
There is no difficulty in extending the proof of Theorem 12.2.3 to higher
dimensions. In the exercise that follows the norm ‖ ‖ is the usual Euclidean
norm and y′(t) = (y_1′(t), y_2′(t), ..., y_n′(t)).
Exercise 12.2.7. (i) Suppose f : R^{n+1} → R^n is continuous, t_0 ∈ R, y_0 ∈ R^n
and δ > 0. Suppose, further, that there exists a K > 0 such that Kδ < 1 and

    ‖f(t, u) − f(t, v)‖ ≤ K‖u − v‖

for all t ∈ [t_0 − δ, t_0 + δ] and all u, v ∈ R^n. Then there exists a unique
y : [t_0 − δ, t_0 + δ] → R^n which is continuous and satisfies the condition

    y(t) = y_0 + ∫_{t_0}^t f(u, y(u)) du

for all t ∈ [t_0 − δ, t_0 + δ].
(ii) With the notation and conditions of (i), y is the unique solution of

    y′(t) = f(t, y(t)),  y(t_0) = y_0

on (t_0 − δ, t_0 + δ).

Exercise 12.2.7 is particularly useful because it enables us to deal with
higher order differential equations. To see how the proof below works, observe
that the second order differential equation

    y″ + y = 0

can be written as two first order differential equations

    y′ = w,  w′ = −y

or, vectorially, as a single first order differential equation

    (y, w)′ = (w, −y).

Lemma 12.2.8. Suppose g : R^{n+1} → R is continuous, t_0 ∈ R, y_j ∈ R for
0 ≤ j ≤ n − 1 and δ > 0. Suppose, further, that there exists a K > 0 such
that (K + 1)δ < 1 and

    |g(t, u) − g(t, v)| ≤ K‖u − v‖

for all t ∈ [t_0 − δ, t_0 + δ] and all u, v ∈ R^n. Then there exists a unique, n
times differentiable, function y : (t_0 − δ, t_0 + δ) → R with

    y^{(n)}(t) = g(t, y(t), y′(t), ..., y^{(n−1)}(t))  and  y^{(j)}(t_0) = y_j for 0 ≤ j ≤ n − 1.

Proof. This uses the trick described above. We define

    f(t, u_1, u_2, ..., u_n) = (u_2, u_3, ..., u_n, g(t, u_1, u_2, ..., u_n)).

The differential equation

    y′(t) = f(t, y(t))

is equivalent to the system of equations

    y_j′(t) = f_j(t, y(t))   [1 ≤ j ≤ n]

which, for our choice of f, becomes

    y_j′(t) = y_{j+1}(t)   [1 ≤ j ≤ n − 1]
    y_n′(t) = g(t, y_1(t), y_2(t), ..., y_n(t)).

Taking y(t) = y_1(t), this gives us y_j(t) = y^{(j−1)}(t) and

    y^{(n)}(t) = g(t, y(t), y′(t), ..., y^{(n−1)}(t)),

which is precisely the differential equation we wish to solve. Our boundary
conditions

    y^{(j)}(t_0) = y_j for 0 ≤ j ≤ n − 1

now take the form y(t_0) = y_0 with

    y_0 = (y_0, y_1, ..., y_{n−1}),

and we have reduced our problem to that studied in Exercise 12.2.7.
To prove existence and uniqueness we need only verify that f satisfies the
appropriate Lipschitz condition. But

    ‖f(t, u) − f(t, v)‖
      = ‖(u_2 − v_2, ..., u_n − v_n, g(t, u_1, ..., u_n) − g(t, v_1, ..., v_n))‖
      ≤ ‖u − v‖ + |g(t, u_1, ..., u_n) − g(t, v_1, ..., v_n)| ≤ (K + 1)‖u − v‖,

so we are done.


12.3 Local to global ♥

We proved Theorem 12.2.3 for functions f with

    |f(t, u) − f(t, v)| ≤ K|u − v|

for all t ∈ [t_0 − δ, t_0 + δ] and all u and v. However, this condition is more
restrictive than is necessary.
Theorem 12.3.1. Suppose η > 0 and f : [t_0 − η, t_0 + η] × [y_0 − η, y_0 + η] → R
is a continuous function satisfying the condition

    |f(t, u) − f(t, v)| ≤ K|u − v|

whenever t ∈ [t_0 − η, t_0 + η] and u, v ∈ [y_0 − η, y_0 + η]. Then we can find
a δ > 0 with η ≥ δ such that there exists a unique differentiable function
y : (t_0 − δ, t_0 + δ) → R which satisfies the equation y′(t) = f(t, y(t)) for all
t ∈ (t_0 − δ, t_0 + δ) together with the boundary condition y(t_0) = y_0.
Proof. This is an easy consequence of Theorem 12.2.3. Define a function
f̃ : R² → R as follows.

    f̃(t, y) = f(t, y)          if |t − t_0| ≤ η, |y − y_0| ≤ η,
    f̃(t, y) = f(t_0 + η, y)    if t > t_0 + η, |y − y_0| ≤ η,
    f̃(t, y) = f(t_0 − η, y)    if t < t_0 − η, |y − y_0| ≤ η,
    f̃(t, y) = f̃(t, y_0 + η)    if y > y_0 + η,
    f̃(t, y) = f̃(t, y_0 − η)    if y < y_0 − η.

We observe that f̃ is continuous and

    |f̃(t, u) − f̃(t, v)| ≤ K|u − v|

for all t, u and v.
If we choose δ̃ > 0 with Kδ̃ < 1, then Theorem 12.2.3 tells us that there
exists a unique differentiable function ỹ : (t_0 − δ̃, t_0 + δ̃) → R which satisfies
the equation ỹ′(t) = f̃(t, ỹ(t)) for all t ∈ (t_0 − δ̃, t_0 + δ̃) together with the
boundary condition ỹ(t_0) = y_0. Since ỹ is continuous, we can find a δ > 0
with η ≥ δ, δ̃ ≥ δ and

    |ỹ(t) − y_0| < η

for all t ∈ (t_0 − δ, t_0 + δ). If we set y = ỹ|_{(t_0−δ, t_0+δ)} (the restriction of ỹ to
(t_0 − δ, t_0 + δ)), then

    (t, y(t)) ∈ [t_0 − η, t_0 + η] × [y_0 − η, y_0 + η]

and so

    f̃(t, y(t)) = f(t, y(t))

for all t ∈ (t_0 − δ, t_0 + δ), so y is the unique solution of

    y′(t) = f(t, y(t))

as required.
˜
Exercise 12.3.2. (i) Describe f̃ in words.
(ii) It is, I think, clear that f̃ is continuous and

    |f̃(t, u) − f̃(t, v)| ≤ K|u − v|

for all t, u and v. Carry out some of the detailed checking which would be
required if someone demanded a complete proof.

Theorem 12.3.1 tells us that, under very wide conditions, the differential
equation has a local solution through each (t_0, y_0). Does it have a global
solution, that is, if f : R² → R is well behaved, can we find a solution for the
equation y′(t) = f(t, y(t)) which is defined for all t ∈ R? Our first result in
this direction is positive.
Theorem 12.3.3. Suppose f : R² → R is a continuous function satisfying
the following condition. There exists a K : [0, ∞) → [0, ∞) such that

    |f(t, u) − f(t, v)| ≤ K(R)|u − v|

whenever |t| ≤ R. Then, given any (t_0, y_0) ∈ R², there exists a unique
y : R → R which is differentiable and satisfies the equation y′(t) = f(t, y(t))
for all t ∈ R together with the boundary condition y(t_0) = y_0.

Note that it makes no difference how fast K(R) increases.
Proof. This proof is worth studying since it is of a type which occurs in several
places in more advanced work. We refer to the equation y′(t) = f(t, y(t)) for
all t ∈ R together with the boundary condition y(t_0) = y_0 as 'the system'.
Our result will follow if we can show that the system has a unique solution
on [t_0, ∞) and on (−∞, t_0]. The proof is essentially the same for the two cases,
so we show that the system has a unique solution on [t_0, ∞). Observe that,
if we can show that the system has a unique solution on [t_0, T) for all T > t_0,
we shall have shown that the system has a unique solution on [t_0, ∞). (Write
y_T : [t_0, T) → R for the solution on [t_0, T). If S ≥ T then y_S(t) = y_T(t) for all
t ∈ [t_0, T) by uniqueness. Thus we can define y : [t_0, ∞) → R by y(t) = y_T(t)
for all t ∈ [t_0, T). By construction y is a solution of the system on [t_0, ∞).
If w : [t_0, ∞) → R is a solution of the system on [t_0, ∞) then, by uniqueness
on [t_0, T), w(t) = y_T(t) = y(t) for all t_0 ≤ t < T and all T > t_0. Since T
was arbitrary, w(t) = y(t) for all t ∈ [t_0, ∞).) We can thus concentrate our
efforts on showing that the system has a unique solution on [t_0, T) for all
T > t_0.
Existence. Let

    E = {τ > t_0 : the system has a solution on [t_0, τ)}.

By Theorem 12.3.1, E is non-empty. If E is bounded it has a supremum T_0,
say. Choose R_0 > |T_0| + 2 and set K_0 = K(R_0). By hypothesis,

    |f(t, u) − f(t, v)| ≤ K_0|u − v|

whenever |t − T_0| < 2. Choose δ_0 > 0 such that 1 > δ_0, T_0 − t_0 > 2δ_0 and
K_0δ_0 < 1. Since T_0 is the supremum of E we can find T_1 ∈ E such that
T_1 > T_0 − δ_0/3. Let y : [t_0, T_1) → R be a solution of the system and let
T_2 = T_1 − δ_0/3. By Theorem 12.3.1, there exists a unique w : (T_2 − δ_0, T_2 + δ_0) → R
such that

    w′(t) = f(t, w(t)),  w(T_2) = y(T_2).

The uniqueness of w means that w(t) = y(t) for all t where both y and w
are defined (that is, on (T_2 − δ_0, T_1)). Setting

    ỹ(t) = y(t)   for t < T_1,
    ỹ(t) = w(t)   for T_1 ≤ t < T_2 + δ_0,

we see that ỹ : [t_0, T_2 + δ_0) → R is a solution of the system. Since T_2 + δ_0 > T_1,
we have a contradiction. Thus, by reductio ad absurdum, E is unbounded
and the system has a solution on [t_0, T) for all T > t_0.
313
Please send corrections however trivial to twk@dpmms.cam.ac.uk
Uniqueness. We need to show that if T > t0 and y and w are solutions of the
system on [t0, T) then y(t) = w(t) for all t ∈ [t0, T). The proof is similar to,
but simpler than, the existence proof just given. Let

E = {T > τ ≥ t0 : y(t) = w(t) for all t ∈ [t0, τ]}.

Since t0 ∈ E, we know that E is non-empty. By definition, E is bounded and
so has a supremum T0. If T0 = T we are done. If not, T0 < T. By continuity,
y(T0) = w(T0). As before, choose R0 > |T0| + 2 and set K0 = K(R0). By
hypothesis,

|f(t, u) − f(t, v)| ≤ K0|u − v|

whenever |t − T0| < 2. Choose δ0 > 0 such that 1 > δ0, T0 − t0 > 2δ0,
T − T0 > 2δ0, and K0δ0 < 1. By Theorem 12.3.1, there exists a unique
z : (T0 − δ0, T0 + δ0) → R such that

z′(t) = f(t, z(t)), z(T0) = y(T0).

By uniqueness y(t) = z(t) = w(t) for all t ∈ (T0 − δ0, T0 + δ0). It follows that
y(t) = w(t) for all t ∈ [t0, T0 + δ0) and so, by continuity, for all t ∈ [t0, T0 + δ0].
Thus T0 + δ0 ∈ E, contradicting the definition of T0. The desired result follows
by contradiction.
Exercise 12.3.4. Suppose that f : (a, b) × R → R is a continuous function
such that, given any t1 ∈ (a, b) and any y1 ∈ R, we can find an η(t1, y1) > 0
and a K(t1, y1) such that

|f(t, u) − f(t, v)| ≤ K(t1, y1)|u − v|

whenever

t ∈ [t1 − η(t1, y1), t1 + η(t1, y1)] and u, v ∈ [y1 − η(t1, y1), y1 + η(t1, y1)].

Show that, if y, w : (a, b) → R are differentiable functions such that

y′(t) = f(t, y(t)), w′(t) = f(t, w(t)) for all t ∈ (a, b)

and y(t0) = w(t0) for some t0 ∈ (a, b), then y(t) = w(t) for all t ∈ (a, b).
Exercise 12.3.5. Use Example 1.1.3 to show that, in the absence of the
fundamental axiom, we cannot expect even very well behaved differential
equations to have unique solutions.
314 A COMPANION TO ANALYSIS

Looking at Theorem 12.3.3, we may ask if we can replace the condition

|f(t, u) − f(t, v)| ≤ K(R)|u − v| whenever |t| ≤ R

by the condition

|f(t, u) − f(t, v)| ≤ K(R)|u − v| whenever |t|, |u|, |v| ≤ R.

Unless the reader is very alert, the answer comes as a surprise, followed almost
at once by surprise that the answer came as a surprise.

Example 12.3.6. Let f(t, y) = 1 + y². Then

|f(t, u) − f(t, v)| ≤ 2R|u − v|

whenever |t|, |u|, |v| ≤ R. However, given t0, y0 ∈ R, there does not exist a
differentiable function y : R → R such that y(t0) = y0 and y′(t) = f(t, y(t))
for all t ∈ R.

Proof. Observe first that

|f(t, u) − f(t, v)| = |u² − v²| = |u + v||u − v| ≤ (|u| + |v|)|u − v| ≤ 2R|u − v|

whenever |t|, |u|, |v| ≤ R.
We can solve the equation

y′ = 1 + y²

formally by considering

dy/(1 + y²) = dt

and obtaining

tan⁻¹ y = t + a,

so that y(t) = tan(t + a) for some constant a. We choose α ∈ (t0 − π/2, t0 + π/2)
so that y0 = tan(t0 − α) satisfies the initial condition and thus obtain

y(t) = tan(t − α)

for α − π/2 < t < α + π/2. We check that we have a solution by direct
differentiation. Exercise 12.3.4 tells us that this is the only solution. Since
tan(t − α) → ∞ as t → α + π/2 through values of t < α + π/2, the required
result follows.
(An alternative proof is outlined in Exercise K.265.)
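The reader may enjoy seeing the blow up numerically. The following sketch (a standard fourth-order Runge–Kutta integrator; the function names are my own and, of course, none of this forms part of the proof) follows the solution of y′ = 1 + y² with y(0) = 0. Well inside (−π/2, π/2) it tracks tan t closely, but as t approaches π/2 the computed values explode.

```python
import math

# A fourth-order Runge-Kutta integrator for y'(t) = f(t, y(t)).
# (An illustration only; it forms no part of the proof.)
def rk4(f, t0, y0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: 1 + y * y

# Inside (-pi/2, pi/2) the numerical solution tracks tan t closely ...
assert abs(rk4(f, 0.0, 0.0, 1.0, 10000) - math.tan(1.0)) < 1e-6
# ... but near pi/2 it explodes: the solution cannot be continued,
# although f is as well behaved a function as one could wish.
assert rk4(f, 0.0, 0.0, 1.5, 10000) > 10
```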
Exercise 12.3.7. (i) Sketch, on the same diagram, various solutions of y′(t) =
1 + y(t)² with different initial conditions.
(ii) Identify the point in our proof of Theorem 12.3.3 where the argument
fails for the function f(t, y) = 1 + y².
We may think of local solutions as a lot of jigsaw pieces. Just looking
at the pieces does not tell us whether they fit together to form a complete
jigsaw.
Here is another example which brings together ideas from various parts
of the book. Although the result is extremely important, I suggest that the
reader does not bother too much with the details of the proof.

Lemma 12.3.8. If z0 ∈ C \ {0} and w0 ∈ C, then the differential equation

f′(z) = 1/z

has a solution with f(z0) = w0 in the set

B(z0, |z0|) = {z ∈ C : |z − z0| < |z0|}.

However, the same differential equation

f′(z) = 1/z

has no solution valid in C \ {0}.
Proof. Our first steps reflect the knowledge gained in results like Example 11.5.16
and Exercise 11.5.20. The power series ∑_{j=1}^∞ (−1)^{j+1} z^j / j has radius
of convergence 1. We define h : B(0, 1) → C by

h(z) = ∑_{j=1}^∞ (−1)^{j+1} z^j / j.

Since we can differentiate term by term within the radius of convergence, we
have

h′(z) = ∑_{j=1}^∞ (−1)^{j+1} z^{j−1} = ∑_{j=0}^∞ (−1)^j z^j = 1/(1 + z)

for all |z| < 1. Thus, if we set f(z) = w0 + h((z − z0)z0⁻¹) for z ∈ B(z0, |z0|),
the chain rule gives

f′(z) = (1/z0) · 1/(1 + (z − z0)z0⁻¹) = 1/z
as desired. Simple calculation gives f(z0) = w0.
The second part of the proof is, as one might expect, closely linked to
Example 5.6.13. Suppose, if possible, that there exists an f : C \ {0} → C
satisfying the differential equation

f′(z) = 1/z.

By replacing f by f − f(1), we may suppose that f(1) = 0. Define A : R → C
by A(t) = −i f(e^{it}). Writing A(t) = a(t) + i b(t) with a(t) and b(t) real, we
see that A′(t) = a′(t) + i b′(t) exists with value

A′(t) = lim_{δt→0} ( −i f(e^{i(t+δt)}) + i f(e^{it}) ) / δt
      = lim_{δt→0} [ ( f(e^{i(t+δt)}) − f(e^{it}) ) / ( e^{i(t+δt)} − e^{it} ) ] · [ ( e^{iδt} − 1 ) / δt ] · (−i e^{it})
      = f′(e^{it}) · i · (−i e^{it}) = e^{it}/e^{it} = 1.

Thus A(t) = t + A(0) = t. In particular,

0 = A(0) = −i f(1) = −i f(e^{2πi}) = A(2π) = 2π,

which is absurd. Thus no function of the type desired can exist.
Exercise 12.3.9. The proof above is one of the kind where the principal
characters wear masks. Go through the above proof using locutions like 'the
thing that ought to behave like log z if log z existed and behaved as we think
it ought'.
Exercise 12.3.10. Write

Bj = {z ∈ C : |z − e^{πij/3}| < 1}.

Show that there exists a function f1 : ⋃_{j=0}^{3} Bj → C with

f1′(z) = 1/z, f1(1) = 0

and a function f2 : ⋃_{j=3}^{6} Bj → C with

f2′(z) = 1/z, f2(1) = 0.

Find f1 − f2 on B0 and on B3.
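We can see the phenomenon of this exercise numerically. Integrating 1/z along the upper semicircle from 1 to −1 and along the lower semicircle from 1 to −1 gives answers differing by 2πi. The sketch below (trapezium rule on short chords; an illustration only, and no part of the exercise) confirms this.

```python
import cmath, math

# Approximate the integral of 1/z along a path z(t), 0 <= t <= 1,
# by the trapezium rule on short chords.  (A numerical sketch only.)
def path_integral(z, n=20000):
    total = 0j
    for k in range(n):
        a, b = z(k / n), z((k + 1) / n)
        total += (1 / a + 1 / b) / 2 * (b - a)
    return total

upper = lambda t: cmath.exp(1j * math.pi * t)    # from 1 to -1 above 0
lower = lambda t: cmath.exp(-1j * math.pi * t)   # from 1 to -1 below 0

# The two continuations of 'log' disagree by 2*pi*i at -1.
diff = path_integral(upper) - path_integral(lower)
assert abs(diff - 2j * math.pi) < 1e-6
```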
We have a lot of beautifully fitting jigsaw pieces, but when we put too
many together they overlap instead of forming a complete picture. Much of
complex variable theory can be considered as an extended meditation on
Lemma 12.3.8.
If the reader is prepared to allow a certain amount of hand waving, here
is another example of this kind of problem. Consider the circle T obtained
by 'rolling up the real line like a carpet' so that the point θ is identified
with the point θ + 2π. If we seek a solution of the equation

f″(θ) + λ²f(θ) = 0

where λ is real and positive, then we can always obtain 'local solutions' f(θ) =
sin(λθ + θ0) valid on any small part of the circle we choose, but only if λ is an
integer can we extend them to the whole circle. When we start doing analysis
on spheres, cylinders, tori and more complicated objects, the problem of
whether we can combine 'local solutions' to form consistent 'global solutions'
becomes more and more central.
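The consistency condition on the circle is easy to test: sin(λ(θ + 2π)) agrees with sin(λθ) for every θ exactly when λ is an integer. A small numerical check (illustration only; the helper `mismatch` is my own):

```python
import math

# The local solution sin(lambda * theta) matches up after one trip
# round the circle precisely when lambda is an integer.
def mismatch(lam):
    return max(abs(math.sin(lam * (t + 2 * math.pi)) - math.sin(lam * t))
               for t in [0.1 * k for k in range(63)])

assert mismatch(3) < 1e-9     # integer: a consistent global solution
assert mismatch(2.5) > 1      # non-integer: the local pieces clash
```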
The next exercise is straightforward and worthwhile but long.
Exercise 12.3.11. (i) State and prove the appropriate generalisation of
Theorem 12.3.3 to deal with a vectorial differential equation

y′(t) = f(t, y(t)).

(ii) Use (i) to obtain the following generalisation of Lemma 12.2.8. Suppose
g : R^{n+1} → R is a continuous function satisfying the following condition.
There exists a K : [0, ∞) → R such that

|g(t, u) − g(t, v)| ≤ K(R)‖u − v‖

whenever |t| ≤ R. Then, given any (t0, y0, y1, . . . , y_{n−1}) ∈ R^{n+1}, there exists
a unique n times differentiable function y : R → R with

y^{(n)}(t) = g(t, y(t), y′(t), . . . , y^{(n−1)}(t)) and y^{(j)}(t0) = yj for 0 ≤ j ≤ n − 1.
Exercise 12.3.12. In this book we have given various approaches to the exponential
and trigonometric functions. Using the material of this section, we
can give a particularly neat treatment which avoids the use of infinite sums.
(i) Explain why there exists a unique differentiable function e : R → R
such that

e′(x) = e(x) for all x ∈ R, e(0) = 1.

By differentiating the function f defined by f(x) = e(a − x)e(x), show that
e(a − x)e(x) = e(a) for all x, a ∈ R and deduce that e(x + y) = e(x)e(y)
for all x, y ∈ R. List all the properties of the exponential function that you
consider important and prove them.
(ii) Explain why there exist unique differentiable functions s, c : R → R
such that

s′(x) = c(x), c′(x) = −s(x) for all x ∈ R, s(0) = 0, c(0) = 1.

By differentiating the function f defined by f(x) = s(a − x)c(x) + c(a − x)s(x),
obtain an important addition formula for trigonometric functions. Obtain
at least one other such addition formula in a similar manner. List all the
properties of sin and cos that you consider important and prove them.
(iii) Write down a differential equation for T(x) = tan x of the form

T′(x) = g(T(x)).

Explain why, without using properties of tan, we know there exists a function
T with T(0) = 0 satisfying this differential equation on some interval (−a, a)
with a > 0. State and prove, using a method similar to those used in parts (i)
and (ii), a formula for T(x + y) when x, y, x + y ∈ (−a, a).
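As a numerical sanity check on part (i) (a sketch only, taking the normalisation e(0) = 1, and in no way a substitute for the argument asked for), we can solve e′ = e by a standard Runge–Kutta scheme and test the functional equation at a few points.

```python
import math

# Solve e'(x) = e(x), e(0) = 1 by fourth-order Runge-Kutta.
def e(x, n=20000):
    h = x / n
    y = 1.0
    for _ in range(n):
        # for y' = y the Runge-Kutta stages take this simple form
        k1 = y
        k2 = y + h * k1 / 2
        k3 = y + h * k2 / 2
        k4 = y + h * k3
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

assert abs(e(1.0) - math.e) < 1e-9              # e(1) is Euler's number
assert abs(e(1.1) - e(0.7) * e(0.4)) < 1e-9     # e(x + y) = e(x) e(y)
```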
12.4 Green's function solutions

In this section we discuss how to solve the differential equation for real-valued
functions on [0, 1] given by

y″(t) + a(t)y′(t) + b(t)y(t) = f(t)

subject to the conditions y(0) = y(1) = 0 by using the Green's function. We
assume that a and b are continuous. Notice that we are dealing with a linear
differential equation, so that, if y1 and y2 are solutions and λ1 + λ2 = 1, then
λ1y1 + λ2y2 is also a solution. Notice also that the boundary conditions are
different from those we have dealt with so far. Instead of specifying y and y′
at one point, we specify y at two points.
Exercise 12.4.1. (i) Check the statement about the solutions.
(ii) Explain why there is no loss in generality in considering the interval
[0, 1] rather than the interval [u, v].
Most of this section will be taken up with an informal discussion leading
to a solution (given in Theorem 12.4.6) that can be verified in a couple of
lines. However, the informal heuristics can be generalised to deal with many
interesting problems and the verification cannot.
When a ball hits a bat its velocity changes very rapidly because the bat
exerts a very large force for a very short time. However, the position of
the ball hardly changes at all during the short time the bat and ball are in
contact. We try to model this by considering the system

y″(t) + a(t)y′(t) + b(t)y(t) = h_η(t), y(0) = y(1) = 0

where

h_η(t) ≥ 0 for all t,
h_η(t) = 0 for all t ∉ [s − η, s + η],
∫_{s−η}^{s+η} h_η(t) dt = 1

and η > 0, [s − η, s + η] ⊆ [0, 1]. We have

y″(t) + a(t)y′(t) + b(t)y(t) = 0 for t ≤ s − η, y(0) = 0
y″(t) + a(t)y′(t) + b(t)y(t) = 0 for t ≥ s + η, y(1) = 0

and

y′(s + η) − y′(s − η) = ∫_{s−η}^{s+η} y″(t) dt
  = ∫_{s−η}^{s+η} ( h_η(t) − a(t)y′(t) − b(t)y(t) ) dt
  = 1 − ∫_{s−η}^{s+η} a(t)y′(t) dt − ∫_{s−η}^{s+η} b(t)y(t) dt.

What happens as we make · small? Although y changes very rapidly we
would expect its value to remain bounded (the velocity of the ball changes
s+·
but remains bounded) so we would expect s’· a(t)y (t) dt to become very
small. We expect the value of y to change very little, so we certainly expect
s+·
b(t)y(t) dt to become very small.
s’·
If we now allow · to tend to zero, we are led to look at the system of
equations
y (t) + a(t)y (t) + b(t)y(t) = 0 for t < s, y(0) = 0
y (t) + a(t)y (t) + b(t)y(t) = 0 for t > s, y(1) = 0
y(s+) = y(s’) = y(s), y (s+) ’ y (s’) = 1.
Here, as usual, y(s+) = limt’s, t>s y(s) and y(s’) = limt’s, t<s y(s). The
statement y(s+) = y(s’) = y(s) thus means that y is continuous at s. We
write the system more brie¬‚y as
y (t) + a(t)y (t) + b(t)y(t) = δs (t), y(0) = y(1) = 0,
320 A COMPANION TO ANALYSIS

where δs may be considered as 'a unit impulse at s', 'the idealisation of
h_η for small η', 'a delta function at s' or 'a Dirac point mass at s' (this
links up with Exercise 9.4.11 on Riemann–Stieltjes integration).
By the previous section, we know that there exists a unique, twice differentiable,
y1 : [0, 1] → R such that

y1″(t) + a(t)y1′(t) + b(t)y1(t) = 0, y1(0) = 0, y1′(0) = 1,

and a unique, twice differentiable, y2 : [0, 1] → R such that

y2″(t) + a(t)y2′(t) + b(t)y2(t) = 0, y2(1) = 0, y2′(1) = 1.

We make the following

key assumption: y1(1) ≠ 0

(so that y2 cannot be a scalar multiple of y1).
If y is a solution of the system above, the uniqueness results of the previous section
tell us that

y(t) = Ay1(t) for 0 ≤ t < s, y(t) = By2(t) for s < t ≤ 1

for appropriate constants A and B. Since y(s+) = y(s−) = y(s), we can find
a constant C such that A = Cy2(s), B = Cy1(s) and so

y(t) = Cy1(t)y2(s) for 0 ≤ t < s, y(t) = Cy2(t)y1(s) for s < t ≤ 1.

The condition y′(s+) − y′(s−) = 1 gives us

C ( y1(s)y2′(s) − y1′(s)y2(s) ) = 1

and so, setting W(s) = y1(s)y2′(s) − y1′(s)y2(s), and assuming, without proof
for the moment, that W(s) ≠ 0, we have

y(t) = y1(t)y2(s)W(s)⁻¹ for 0 ≤ t ≤ s, y(t) = y2(t)y1(s)W(s)⁻¹ for s ≤ t ≤ 1.

Although we shall continue with our informal argument afterwards, we
take time out to establish that W is never zero.

De¬nition 12.4.2. If u1 and u2 are two solutions of

y (t) + a(t)y (t) + b(t)y(t) = 0,

we de¬ne the associated Wronskian W by W (t) = u1 (t)u2 (t) ’ u1 (t)u2 (t).
Lemma 12.4.3. (i) If W is as in the preceding definition, then W′(t) =
−a(t)W(t) for all t ∈ [0, 1].
(ii) If W is as in the preceding definition, then

W(t) = A e^{−∫_0^t a(x) dx}

for all t ∈ [0, 1] and some constant A.
(iii) If y1 and y2 are as in the discussion above and the key assumption
holds, then

W(s) = y1(s)y2′(s) − y1′(s)y2(s) ≠ 0

for all s ∈ [0, 1].

Proof. (i) Just observe that

W′(t) = u1′(t)u2′(t) + u1(t)u2″(t) − u1″(t)u2(t) − u1′(t)u2′(t) = u1(t)u2″(t) − u1″(t)u2(t)
      = u1(t)(−a(t)u2′(t) − b(t)u2(t)) − u2(t)(−a(t)u1′(t) − b(t)u1(t)) = −a(t)W(t).

(ii) We solve the differential equation formally, obtaining

W′(t)/W(t) = −a(t),

whence

log W(t) = −∫_0^t a(x) dx + log A

and so W(t) = A e^{−∫_0^t a(x) dx} for some constant A.
We verify directly that this is indeed a solution. The uniqueness results
of the previous section (note that W(0) = A) show that it is the unique
solution.
(iii) Observe that W(1) = y1(1)y2′(1) − y1′(1)y2(1) = y1(1) ≠ 0 by the
key assumption. Since W does not vanish at 1, part (ii) shows that it
vanishes nowhere.
Exercise 12.4.4. Prove part (ii) of Lemma 12.4.3 by considering the derivative
of the function f given by f(t) = W(t) exp(∫_0^t a(x) dx).
A more general view of the Wronskian is given by Exercise K.272.
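Part (ii) of Lemma 12.4.3 (often called Abel's formula) is easy to test numerically. The sketch below uses illustrative coefficients a(t) = 1 and b(t) = t (my choices, not from the text), integrates y″ + a(t)y′ + b(t)y = 0 for two independent initial conditions at 0, and compares the Wronskian at t = 1 with W(0)e^{−∫_0^1 a(x) dx} = e^{−1}.

```python
import math

a = lambda t: 1.0     # illustrative coefficients, not from the text
b = lambda t: t

# One Runge-Kutta solve of y'' + a(t) y' + b(t) y = 0 written as the
# first-order system (y, v)' = (v, -a v - b y).
def solve(y0, v0, t1=1.0, n=20000):
    def f(t, y, v):
        return v, -a(t) * v - b(t) * y
    h = t1 / n
    t, y, v = 0.0, y0, v0
    for _ in range(n):
        k1 = f(t, y, v)
        k2 = f(t + h / 2, y + h * k1[0] / 2, v + h * k1[1] / 2)
        k3 = f(t + h / 2, y + h * k2[0] / 2, v + h * k2[1] / 2)
        k4 = f(t + h, y + h * k3[0], v + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        t += h
    return y, v

u1, u1p = solve(1.0, 0.0)    # u1(0) = 1, u1'(0) = 0
u2, u2p = solve(0.0, 1.0)    # u2(0) = 0, u2'(0) = 1
W0 = 1.0                     # Wronskian at t = 0
W1 = u1 * u2p - u1p * u2     # Wronskian at t = 1
# Abel's formula: W(1) = W(0) exp(-int_0^1 a(x) dx) = exp(-1)
assert abs(W1 - W0 * math.exp(-1.0)) < 1e-8
```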
We write G(s, t) = y(t), where y is the solution, obtained above, of the system
with the unit impulse at s; that is, we set

G(s, t) = y1(t)y2(s)W(s)⁻¹ for 0 ≤ t ≤ s,
G(s, t) = y2(t)y1(s)W(s)⁻¹ for s ≤ t ≤ 1.

The function G : [0, 1]² → R is called a Green's function.
We return to our informal argument. Since t ↦ G(s, t) is the solution of

y″(t) + a(t)y′(t) + b(t)y(t) = δs(t), y(0) = y(1) = 0,

it follows, by linearity, that y(t) = ∑_{j=1}^m λj G(sj, t) is the solution of

y″(t) + a(t)y′(t) + b(t)y(t) = ∑_{j=1}^m λj δ_{sj}(t), y(0) = y(1) = 0.

In particular, if f : [0, 1] → R, then yN(t) = N⁻¹ ∑_{j=1}^N f(j/N) G(j/N, t) is
the solution of

y″(t) + a(t)y′(t) + b(t)y(t) = N⁻¹ ∑_{j=1}^N f(j/N) δ_{j/N}(t), y(0) = y(1) = 0.

Now imagine yourself pushing a large object. You could either give a continuous
push, applying a force of magnitude f(t), or give a sequence of sharp
taps N⁻¹ ∑_{j=1}^N f(j/N) δ_{j/N}(t). As you make the interval between the taps
ever smaller (reducing the magnitude of each individual tap proportionally)
the two ways of pushing the object become more and more alike and

N⁻¹ ∑_{j=1}^N f(j/N) δ_{j/N} → f in some way which we cannot precisely define.

It is therefore plausible that, as N → ∞,

yN → y∗ in some way to be precisely determined later,

where y∗ is the solution of

y∗″(t) + a(t)y∗′(t) + b(t)y∗(t) = f(t), y∗(0) = y∗(1) = 0.

It also seems very likely that

yN(t) = N⁻¹ ∑_{j=1}^N f(j/N) G(j/N, t) → ∫_0^1 f(s) G(s, t) ds.