when both methods succeed, the quasi-Newton method, which requires a single function evaluation for each outer iteration when full steps can be taken, may well be most efficient.

8.4.1. Inverse tangent function. Since we have easy access to analytic derivatives in this example, we can use the two-point parabolic line search. We compare the two-point parabolic line search with the constant reduction search (σ0 = σ1 = .5) for the arctan function. In Fig. 8.2 we plot the iteration history corresponding to the parabolic line search with the solid line and that for the constant reduction with the dashed line. We use x0 = 10 as the initial iterate with τr = τa = 10^-8. The parabolic line search required 7 outer iterates and 21 function evaluations, in contrast with the constant reduction search's 11 outer iterates and 33 function evaluations. In the first outer iteration, both line searches take three steplength reductions. However, the parabolic line search takes only one reduction in the next three iterations and none thereafter. The constant reduction line search took three reductions in the first two outer iterations and two in each of the following two.
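The scalar Newton–Armijo iteration with a two-point parabolic search can be sketched as follows. This is an illustrative Python sketch, not the MATLAB code used for the experiments; the parameter values (α = 10^-4, safeguard interval [.1λ, .5λ]) are typical choices, and the iteration counts it produces need not match the ones reported above.

```python
import math

def newton_armijo(f, fprime, x0, tau_r=1e-8, tau_a=1e-8,
                  alpha=1e-4, sigma0=0.1, sigma1=0.5, maxit=100):
    """Newton-Armijo for a scalar equation f(x) = 0, with a two-point
    parabolic backtracking line search on phi(t) = f(x + t*d)**2."""
    x = x0
    fx = f(x); fevals = 1
    tol = tau_r * abs(fx) + tau_a
    for outer in range(maxit):
        if abs(fx) <= tol:
            return x, outer, fevals
        d = -fx / fprime(x)                      # Newton direction
        lam = 1.0
        phi0, dphi0 = fx * fx, -2.0 * fx * fx    # phi(0) and phi'(0)
        ft = f(x + lam * d); fevals += 1
        # Armijo sufficient decrease test on the residual
        while abs(ft) >= (1.0 - alpha * lam) * abs(fx):
            # minimizer of the parabola interpolating phi(0), phi'(0), phi(lam)
            c = (ft * ft - phi0 - dphi0 * lam) / (lam * lam)
            lam_new = -dphi0 / (2.0 * c) if c > 0.0 else sigma1 * lam
            # safeguard: keep the new steplength in [sigma0*lam, sigma1*lam]
            lam = min(max(lam_new, sigma0 * lam), sigma1 * lam)
            ft = f(x + lam * d); fevals += 1
        x, fx = x + lam * d, ft
    return x, maxit, fevals

x, outers, fevals = newton_armijo(math.atan, lambda t: 1.0 / (1.0 + t * t), 10.0)
print("outer iterations:", outers, " function evaluations:", fevals)
```

For f(x) = arctan(x) with x0 = 10 the full Newton step overshoots badly, and the parabolic model picks the steplength reductions automatically.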

8.4.2. Convection-diffusion equation. We consider a much more difficult problem. We take (6.21) from § 6.4.2,

−∇²u + Cu(ux + uy) = f,

with the same right hand side f, initial iterate u0 = 0, 31 × 31 mesh, and homogeneous Dirichlet boundary conditions on the unit square (0, 1) × (0, 1) that we considered before. Here, however, we set C = 100. This makes a globalization strategy critical (try it without one!). We set τr = τa = h²/10. This is a tighter tolerance than in § 6.4.2 and, because of the large value of C and the resulting ill-conditioning, is needed to obtain an accurate solution.

In Fig. 8.3 we show the progress of the iteration for the unpreconditioned equation. For this problem we plot the progress of the iteration using η = .25 with the solid line and using the sequence given by (6.20) with the dashed

Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR16.html.

Copyright ©1995 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.


Fig. 8.2. Newton–Armijo for the arctan function. (Semilog plot of relative nonlinear residual against function evaluations.)

line. We set the parameters in (6.20) to γ = .9 and ηmax = .25. This problem is very poorly conditioned and much tighter control on the inner iteration is needed than for the other problems. The two approaches to selection of the forcing term performed very similarly. The iterations required 75.3 (constant η) and 75.4 (varying η) million floating-point operations, 759 (constant η) and 744 (varying η) function evaluations, and 25 (constant η) and 22 (varying η) outer iterations.
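The safeguarded forcing-term sequence (6.20) can be sketched as follows. This is our reading of the Eisenstat–Walker choice and its usual safeguards; the details in the book's nsola implementation may differ.

```python
def forcing_term(norm_f, norm_f_old, eta_old,
                 gamma=0.9, eta_max=0.25, tol=1e-10):
    """Safeguarded forcing term eta_n for an inexact Newton iteration,
    in the spirit of (6.20).  norm_f and norm_f_old are ||F(x_n)|| and
    ||F(x_{n-1})||; tol is the outer termination tolerance."""
    eta = gamma * (norm_f / norm_f_old) ** 2       # raw Eisenstat-Walker choice
    if gamma * eta_old ** 2 > 0.1:                 # do not let eta drop too fast
        eta = max(eta, gamma * eta_old ** 2)
    eta = min(eta_max, eta)
    # avoid oversolving once the outer iteration is near its tolerance
    return min(eta_max, max(eta, 0.5 * tol / norm_f))

print(forcing_term(0.5, 1.0, 0.25))
```

With γ = .9 and the residual halved, the raw choice is .9(1/2)² = .225, which the cap ηmax = .25 leaves unchanged.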

Fig. 8.3. Newton–Armijo for the PDE, C = 100. (Semilog plot of relative nonlinear residual against function evaluations.)

We consider the preconditioned problem in Fig. 8.4. In the computation we


preconditioned (6.21) with the fast Poisson solver fish2d. In Fig. 8.4 we show the progress of the iteration for the preconditioned equation. For this problem we plot the progress of the iteration using η = .25 with the solid line and using the sequence given by (6.20) with the dashed line. We set the parameters in (6.20) to γ = .9 and ηmax = .99.

The constant η iteration terminated after 79 function evaluations, 9 outer iterations, and roughly 13.5 million floating-point operations. The line search in this case reduced the step length three times on the first iteration and twice on the second and third.

The iteration in which {ηn} is given by (6.20) terminated after 70 function evaluations, 9 outer iterations, and 12.3 million floating-point operations. The line search in the non-constant η case reduced the step length three times on the first iteration and once on the second. The most efficient iteration, with the forcing term given by (6.20), required at most 16 inner iterations while the constant η approach needed at most 10.

Fig. 8.4. Newton–Armijo for the PDE, C = 100. (Semilog plot of relative nonlinear residual against function evaluations.)

8.4.3. Broyden–Armijo. We found that the Broyden–Armijo line search failed on all the unpreconditioned convection-diffusion equation examples. In these cases the Jacobian is not well approximated by a low rank perturbation of the identity [112], and the ability of the inexact Newton iteration to find a good search direction was the important factor.

We begin by revisiting the example in § 6.4.2 with C = 20. As reported in § 6.4.2, the Newton-GMRES iteration always took full steps, terminating in 4 outer iterations, 16 function evaluations, and 2.6 million floating-point operations. Compared to the Broyden's method results in § 6.4.2, in which increases in the residual were allowed, the Broyden–Armijo costs reflect


an improvement in efficiency. The Broyden iteration in § 6.4.2 took 12 iterations and 2.8 million floating-point operations, while the Broyden–Armijo iteration took 9 iterations and required 2 million floating-point operations. We set τa = τr = h², ηmax = .5, and γ = .9, as we did in § 6.4.2.

In our implementation of brsola the three-point parabolic line search was used. The steplength was reduced twice on the second iteration and once on the third. Figure 8.5 compares the Broyden–Armijo iteration (solid line) to the GMRES iteration (dashed line).
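The structure of a Broyden–Armijo iteration can be sketched on a small system. This pure-Python illustration forms the Broyden matrix explicitly and uses a constant reduction search (σ = 1/2); brsola instead stores the update history and uses a parabolic search, and the test system and initial data below are our own choices, not from the text.

```python
import math

def broyden_armijo(F, x0, B0, tau=1e-10, alpha=1e-4, maxit=50):
    """Broyden's method with an Armijo backtracking line search for a
    2x2 system.  B0 is the initial approximate Jacobian; here we pass
    the analytic Jacobian at x0, so the first step is a Newton step."""
    def norm(v):
        return math.hypot(v[0], v[1])
    def solve2(B, b):                       # solve B d = b by Cramer's rule
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        return [(b[0] * B[1][1] - b[1] * B[0][1]) / det,
                (B[0][0] * b[1] - B[1][0] * b[0]) / det]
    x = list(x0)
    B = [row[:] for row in B0]
    Fx = F(x)
    for n in range(maxit):
        if norm(Fx) <= tau:
            return x, n
        d = solve2(B, [-Fx[0], -Fx[1]])     # Broyden direction: B d = -F(x)
        lam, reductions = 1.0, 0
        xt = [x[0] + lam * d[0], x[1] + lam * d[1]]
        Ft = F(xt)
        # constant reduction search (sigma = 1/2), capped at 10 reductions
        while norm(Ft) >= (1.0 - alpha * lam) * norm(Fx) and reductions < 10:
            lam *= 0.5
            reductions += 1
            xt = [x[0] + lam * d[0], x[1] + lam * d[1]]
            Ft = F(xt)
        s = [xt[0] - x[0], xt[1] - x[1]]
        y = [Ft[0] - Fx[0], Ft[1] - Fx[1]]
        # Broyden update: B <- B + (y - B s) s^T / (s^T s)
        Bs = [B[i][0] * s[0] + B[i][1] * s[1] for i in range(2)]
        sts = s[0] * s[0] + s[1] * s[1]
        for i in range(2):
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * s[j] / sts
        x, Fx = xt, Ft
    return x, maxit

def F(z):   # a small test system (our own choice)
    x, y = z
    return [x + y - 2.0 + (x - 1.0) ** 2, x - y + (y - 1.0) ** 2]

# initial approximate Jacobian = F'(x0) at x0 = (0, 0)
root, its = broyden_armijo(F, [0.0, 0.0], [[-1.0, 1.0], [1.0, -3.0]])
print(root, its, F(root))
```

The cap on reductions matters: as the text notes, the Broyden direction need not be a descent direction, in which case the backtracking loop cannot succeed and the step must be handled differently (see Exercise 8.5.10).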

Fig. 8.5. Newton-GMRES and Broyden–Armijo for the PDE, C = 20. (Semilog plot of relative nonlinear residual against function evaluations.)

For the more difficult problem with C = 100, the performance of the two methods is more similar. In Fig. 8.6 we compare our implementation of Newton–GMRES–Armijo (dashed line) to Broyden–Armijo (solid line). We set τa = τr = h²/10, ηmax = .99, and γ = .9. The Broyden–Armijo iteration required 34 nonlinear iterations, roughly 16 million floating-point operations, and 85 function evaluations. In terms of storage, the Broyden–Armijo iteration required storage for 37 vectors, while the Newton-GMRES iteration needed at most 16 inner iterations, needing to store 21 vectors, and therefore was much more efficient in terms of storage.

Restarting the Broyden iteration every 19 iterations would equalize the storage requirements with Newton–GMRES–Armijo. Broyden's method suffers under these conditions, as one can see from the comparison of Broyden–Armijo (solid line) and Newton–GMRES–Armijo (dashed line) in Fig. 8.7. The restarted Broyden–Armijo iteration took 19.5 million floating-point operations, 42 outer iterates, and 123 function evaluations.

From these examples we see that Broyden–Armijo can be very efficient provided only a small number of iterations are needed. The storage requirements increase as the nonlinear iteration progresses, and this fact puts Broyden's


Fig. 8.6. Newton-GMRES and Broyden–Armijo for the PDE, C = 100. (Semilog plot of relative nonlinear residual against function evaluations.)

Fig. 8.7. Newton-GMRES and restarted Broyden–Armijo for the PDE, C = 100. (Semilog plot of relative nonlinear residual against function evaluations.)

method at a disadvantage to Newton-GMRES when the number of nonlinear

iterations is large.


8.5. Exercises on global convergence

8.5.1. How does the method proposed in [10] differ from the one implemented in nsola? What could be advantages and disadvantages of that approach?

8.5.2. In what ways are the results in [25] and [70] more general than those in

this section? What ideas do these papers share with the analysis in this

section and with [3] and [88]?

8.5.3. Implement the Newton–Armijo method for single equations. Apply your code to the following functions with x0 = 10. Explain your results.

1. f(x) = arctan(x) (i.e., duplicate the results in § 8.1.)

2. f(x) = arctan(x²)

3. f(x) = .9 + arctan(x)

4. f(x) = x(1 + sin(1/x))

5. f(x) = e^x

6. f(x) = 2 + sin(x)

7. f(x) = 1 + x²

8.5.4. A numerical analyst buys a German sports car for $50,000. He puts $10,000 down and takes a 7-year installment loan to pay the balance. If the monthly payments are $713.40, what is the interest rate? Assume monthly compounding.
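Exercise 8.5.4 reduces to a scalar root-finding problem for the standard annuity formula. Here is a sketch of one way to set it up; the solver parameters and the forward-difference derivative are our choices, not part of the exercise.

```python
def payment(r, loan=40000.0, n=84):
    """Monthly payment on `loan` over n months at monthly rate r
    (standard annuity formula for monthly compounding)."""
    return loan * r / (1.0 - (1.0 + r) ** (-n))

def solve_rate(target, r0=0.01, tol=1e-10, maxit=50):
    """Newton's method for payment(r) = target, with a forward-difference
    derivative; r0, tol, and the difference increment are our choices."""
    r = r0
    for _ in range(maxit):
        res = payment(r) - target
        if abs(res) < tol:
            break
        h = 1e-7
        slope = (payment(r + h) - payment(r)) / h   # approximate derivative
        r -= res / slope
    return r

# $50,000 car with $10,000 down leaves a $40,000 loan over 7 x 12 = 84 months
monthly = solve_rate(713.40)
print("monthly rate %.6f, nominal annual rate %.2f%%" % (monthly, 1200.0 * monthly))
```

Since payment(r) is smooth and monotonically increasing in r, plain Newton from a reasonable starting rate converges quickly and no line search is needed.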

8.5.5. Verify (8.8) and (8.9).

8.5.6. Show that if F′ is Lipschitz continuous and the iteration {xn} produced by Algorithm nsola converges to x* with F(x*) ≠ 0, then F′(x*) is singular.

8.5.7. Use nsola to duplicate the results in § 8.4.2. Vary the convection coefficient in the convection-diffusion equation and the mesh size and report the results.

8.5.8. Experiment with other linear solvers such as GMRES(m), Bi-CGSTAB,

and TFQMR. This is interesting in the locally convergent case as well.

You might use the MATLAB code nsola to do this.

8.5.9. Modify nsol, the hybrid Newton algorithm from Chapter 5, to use the

Armijo rule. Try to do it in such a way that the chord direction is used

whenever possible.

8.5.10. Modify brsola to test the Broyden direction for the descent property

and use a two-point parabolic line search. What could you do if the

Broyden direction is not a descent direction? Apply your code to the

examples in § 8.4.


8.5.11. Modify nsola to use a cubic polynomial and constant reduction line

searches instead of the quadratic polynomial line search. Compare the

results with the examples in § 8.4.

8.5.12. Does the secant method for equations in one variable always give a direction that satisfies (8.2) with ηn bounded away from 1? If not, when does it? How would you implement a secant-Armijo method in such a way that the convergence theorem 8.2.1 is applicable?

8.5.13. Under what conditions will the iteration given by nsola converge to a root x* that is independent of the initial iterate?


Bibliography

[1] E. L. Allgower and K. Georg, Simplicial and continuation methods for approximating fixed points and solutions to systems of equations, SIAM Rev., 22 (1980), pp. 28–85.

[2] ——, Numerical Continuation Methods: An Introduction, Springer-Verlag, New York, 1990.

[3] L. Armijo, Minimization of functions having Lipschitz-continuous first partial derivatives, Pacific J. Math., 16 (1966), pp. 1–3.

[4] W. E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math., 9 (1951), pp. 17–29.

[5] S. F. Ashby, T. A. Manteuffel, and J. S. Otto, A comparison of adaptive Chebyshev and least squares polynomial preconditioning for Hermitian positive definite linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 1–29.

[6] K. E. Atkinson, Iterative variants of the Nyström method for the numerical solution of integral equations, Numer. Math., 22 (1973), pp. 17–31.

[7] ——, An Introduction to Numerical Analysis, 2nd ed., John Wiley, New York, 1989.

[8] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.

[9] S. Banach, Sur les opérations dans les ensembles abstraits et leur applications aux équations intégrales, Fund. Math., 3 (1922), pp. 133–181.

[10] R. E. Bank and D. J. Rose, Global approximate Newton methods, Numer. Math., 37 (1981), pp. 279–295.

[11] M. S. Bartlett, An inverse matrix adjustment arising in discriminant analysis, Ann. Math. Statist., 22 (1951), pp. 107–111.

[12] R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1993.

[13] N. Bićanić and K. H. Johnson, Who was '–Raphson'?, Internat. J. Numer. Methods Engrg., 14 (1979), pp. 148–152.

[14] P. B. Bosma and W. A. DeRooij, Efficient methods to calculate Chandrasekhar's H-functions, Astron. Astrophys., 126 (1983), p. 283.

[15] H. Brakhage, Über die numerische Behandlung von Integralgleichungen nach der Quadraturformelmethode, Numer. Math., 2 (1960), pp. 183–196.

[16] K. E. Brenan, S. L. Campbell, and L. R. Petzold, Numerical Solution


of Initial Value Problems in Differential-Algebraic Equations, no. 14 in Classics in Applied Mathematics, SIAM, Philadelphia, 1996.

[17] R. Brent, Algorithms for Minimization Without Derivatives, Prentice-Hall, Englewood Cliffs, NJ, 1973.

[18] ——, Some efficient algorithms for solving systems of nonlinear equations, SIAM J. Numer. Anal., 10 (1973), pp. 327–344.

[19] W. Briggs, A Multigrid Tutorial, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1987.

[20] P. N. Brown, A local convergence theory for combined inexact-Newton/finite-difference projection methods, SIAM J. Numer. Anal., 24 (1987), pp. 407–434.

[21] P. N. Brown, G. D. Byrne, and A. C. Hindmarsh, VODE: A variable coefficient ODE solver, SIAM J. Sci. Statist. Comput., 10 (1989), pp. 1038–1051.

[22] P. N. Brown and A. C. Hindmarsh, Reduced storage matrix methods in stiff ODE systems, J. Appl. Math. Comput., 31 (1989), pp. 40–91.

[23] P. N. Brown, A. C. Hindmarsh, and L. R. Petzold, Using Krylov methods in the solution of large-scale differential-algebraic systems, SIAM J. Sci. Comput., 15 (1994), pp. 1467–1488.

[24] P. N. Brown and Y. Saad, Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 450–481.

[25] ——, Convergence theory of nonlinear Newton-Krylov algorithms, SIAM J. Optim., 4 (1994), pp. 297–330.

[26] C. G. Broyden, A class of methods for solving nonlinear simultaneous equations, Math. Comput., 19 (1965), pp. 577–593.

[27] ——, The convergence of an algorithm for solving sparse nonlinear systems, Math. Comput., 25 (1971), pp. 285–294.

[28] C. G. Broyden, J. E. Dennis, and J. J. Moré, On the local and superlinear convergence of quasi-Newton methods, J. Inst. Maths. Applic., 12 (1973), pp. 223–246.

[29] W. Burmeister, Zur Konvergenz einiger Verfahren der konjugierten Richtungen, in Proceedings of Internationaler Kongreß über Anwendung der Mathematik in den Ingenieurwissenschaften, Weimar, 1975.

[30] I. W. Busbridge, The Mathematics of Radiative Transfer, Cambridge Tracts, No. 50, Cambridge Univ. Press, Cambridge, 1960.

[31] R. H. Byrd, J. Nocedal, and R. B. Schnabel, Representation of quasi-Newton matrices and their use in limited memory methods, Math. Programming, 63 (1994), pp. 129–156.

[32] X.-C. Cai, W. D. Gropp, D. E. Keyes, and M. D. Tidriri, Newton-Krylov-Schwarz methods in CFD, in Proceedings of the International Workshop on the Navier-Stokes Equations, R. Rannacher, ed., Notes in Numerical Fluid Mechanics, Vieweg Verlag, Braunschweig, 1994.

[33] S. L. Campbell, I. C. F. Ipsen, C. T. Kelley, and C. D. Meyer, GMRES and the minimal polynomial, Tech. Report CRSC-TR94-10, North Carolina State University, Center for Research in Scientific Computation, July 1994. BIT, to appear.

[34] S. L. Campbell, I. C. F. Ipsen, C. T. Kelley, C. D. Meyer, and Z. Q. Xue, Convergence estimates for solution of integral equations with GMRES, Tech. Report CRSC-TR95-13, North Carolina State University, Center for Research in Scientific Computation, March 1995. Journal of Integral Equations and Applications, to appear.

[35] R. Cavanaugh, Difference Equations and Iterative Processes, PhD thesis, University of Maryland, 1970.


[36] F. Chaitin-Chatelin, Is nonnormality a serious difficulty?, Tech. Report TR/PA/94/18, CERFACS, December 1994.

[37] T. Chan, E. Gallopoulos, V. Simoncini, T. Szeto, and C. Tong, A quasi-minimal residual variant of the Bi-CGSTAB algorithm for nonsymmetric systems, SIAM J. Sci. Comput., 15 (1994), p. 338.

[38] T. Chan, R. Glowinski, J. Périaux, and O. Widlund, eds., Domain Decomposition Methods, Proceedings of the Second International Symposium on Domain Decomposition Methods, Los Angeles, CA, January 14–16, 1988; Society for Industrial and Applied Mathematics, Philadelphia, PA, 1989.

[39] ——, eds., Domain Decomposition Methods, Proceedings of the Third International Symposium on Domain Decomposition Methods, Houston, TX, 1989; Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990.

[40] ——, eds., Domain Decomposition Methods, Proceedings of the Fourth International Symposium on Domain Decomposition Methods, Moscow, USSR, 1990; Society for Industrial and Applied Mathematics, Philadelphia, PA, 1991.

[41] S. Chandrasekhar, Radiative Transfer, Dover, New York, 1960.

[42] T. F. Coleman and J. J. Moré, Estimation of sparse Jacobian matrices and graph coloring problems, SIAM J. Numer. Anal., 20 (1983), pp. 187–209.

[43] T. F. Coleman and C. Van Loan, Handbook for Matrix Computations, Frontiers in Appl. Math., No. 4, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1988.

[44] P. Concus, G. H. Golub, and G. Meurant, Block preconditioning for the conjugate gradient method, SIAM J. Sci. Statist. Comput., 6 (1985), pp. 220–252.

[45] P. Concus, G. H. Golub, and D. P. O'Leary, A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations, in Sparse Matrix Computations, J. R. Bunch and D. J. Rose, eds., Academic Press, 1976, pp. 309–332.

[46] E. J. Craig, The N-step iteration procedures, J. Math. Phys., 34 (1955), pp. 64–73.

[47] A. R. Curtis, M. J. D. Powell, and J. K. Reid, On the estimation of sparse Jacobian matrices, J. Inst. Math. Appl., 13 (1974), pp. 117–119.

[48] J. W. Daniel, The conjugate gradient method for linear and nonlinear operator equations, SIAM J. Numer. Anal., 4 (1967), pp. 10–26.

[49] D. W. Decker, H. B. Keller, and C. T. Kelley, Convergence rates for Newton's method at singular points, SIAM J. Numer. Anal., 20 (1983), pp. 296–314.

[50] D. W. Decker and C. T. Kelley, Newton's method at singular points I, SIAM J. Numer. Anal., 17 (1980), pp. 66–70.

[51] ——, Convergence acceleration for Newton's method at singular points, SIAM J. Numer. Anal., 19 (1982), pp. 219–229.

[52] ——, Sublinear convergence of the chord method at singular points, Numer. Math., 42 (1983), pp. 147–154.

[53] ——, Broyden's method for a class of problems having singular Jacobian at the root, SIAM J. Numer. Anal., 22 (1985), pp. 566–574.

[54] T. J. Dekker, Finding a zero by means of successive linear interpolation, in Constructive Aspects of the Fundamental Theorem of Algebra, P. Henrici, ed., 1969, pp. 37–48.

[55] R. Dembo, S. Eisenstat, and T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal., 19 (1982), pp. 400–408.

[56] R. Dembo and T. Steihaug, Truncated Newton algorithms for large-scale optimization, Math. Programming, 26 (1983), pp. 190–212.


[57] J. E. Dennis, On the Kantorovich hypothesis for Newton's method, SIAM J. Numer. Anal., 6 (1969), pp. 493–507.

[58] ——, Toward a unified convergence theory for Newton-like methods, in Nonlinear Functional Analysis and Applications, L. B. Rall, ed., Academic Press, New York, 1971, pp. 425–472.

[59] J. E. Dennis, J. M. Martinez, and X. Zhang, Triangular decomposition methods for solving reducible nonlinear systems of equations, SIAM J. Optim., 4 (1994), pp. 358–382.

[60] J. E. Dennis and J. J. Moré, A characterization of superlinear convergence and its application to quasi-Newton methods, Math. Comput., 28 (1974), pp. 549–560.

[61] ——, Quasi-Newton methods, motivation and theory, SIAM Rev., 19 (1977), pp. 46–89.

[62] J. E. Dennis and R. B. Schnabel, Least change secant updates for quasi-Newton methods, SIAM Rev., 21 (1979), pp. 443–459.

[63] ——, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, no. 16 in Classics in Applied Mathematics, SIAM, Philadelphia, 1996.

[64] J. E. Dennis and H. F. Walker, Convergence theorems for least change secant update methods, SIAM J. Numer. Anal., 18 (1981), pp. 949–987.

[65] ——, Inaccuracy in quasi-Newton methods: Local improvement theorems, in Mathematical Programming Study 22: Mathematical programming at Oberwolfach II, North-Holland, Amsterdam, 1984, pp. 70–85.

[66] ——, Least-change sparse secant updates with inaccurate secant conditions, SIAM J. Numer. Anal., 22 (1985), pp. 760–778.

[67] P. Deuflhard, R. W. Freund, and A. Walter, Fast secant methods for the iterative solution of large nonsymmetric linear systems, Impact of Computing in Science and Engineering, 2 (1990), pp. 244–276.

[68] W. J. Duncan, Some devices for the solution of large sets of simultaneous linear equations (with an appendix on the reciprocation of partitioned matrices), The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Seventh Series, 35 (1944), pp. 660–670.

[69] S. C. Eisenstat and H. F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput., 17 (1996), pp. 16–32.

[70] ——, Globally convergent inexact Newton methods, SIAM J. Optim., 4 (1994), pp. 393–422.

[71] H. C. Elman, Iterative Methods for Large, Sparse, Nonsymmetric Systems of Linear Equations, PhD thesis, Yale University, New Haven, CT, 1982.

[72] H. C. Elman, Y. Saad, and P. E. Saylor, A hybrid Chebyshev-Krylov subspace algorithm for solving nonsymmetric systems of linear equations, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 840–855.

[73] M. Engelman, G. Strang, and K. J. Bathe, The application of quasi-Newton methods in fluid mechanics, Internat. J. Numer. Methods Engrg., 17 (1981), pp. 707–718.

[74] V. Faber and T. A. Manteuffel, Necessary and sufficient conditions for the existence of a conjugate gradient method, SIAM J. Numer. Anal., 21 (1984), pp. 352–362.

[75] D. Feng, P. D. Frank, and R. B. Schnabel, Local convergence analysis of tensor methods for nonlinear equations, Math. Programming, 62 (1993), pp. 427–459.

[76] R. Fletcher, Conjugate gradient methods for indefinite systems, in Numerical Analysis Dundee 1975, G. Watson, ed., Springer-Verlag, Berlin, New York, 1976, pp. 73–89.


[77] R. W. Freund, A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems, SIAM J. Sci. Comput., 14 (1993), pp. 470–482.

[78] R. W. Freund, G. H. Golub, and N. M. Nachtigal, Iterative solution of linear systems, Acta Numerica, 1 (1991), pp. 57–100.

[79] R. W. Freund, M. H. Gutknecht, and N. M. Nachtigal, An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, SIAM J. Sci. Comput., 14 (1993), pp. 137–158.

[80] R. W. Freund and N. M. Nachtigal, QMR: a quasi-minimal residual algorithm for non-Hermitian linear systems, Numer. Math., 60 (1991), pp. 315–339.

[81] ——, An implementation of the QMR method based on coupled two-term recurrences, SIAM J. Sci. Comput., 15 (1994), pp. 313–337.

[82] D. M. Gay, Some convergence properties of Broyden's method, SIAM J. Numer. Anal., 16 (1979), pp. 623–630.

[83] C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1971.

[84] R. R. Gerber and F. T. Luk, A generalized Broyden's method for solving simultaneous linear equations, SIAM J. Numer. Anal., 18 (1981), pp. 882–890.

[85] P. E. Gill and W. Murray, Quasi-Newton methods for unconstrained optimization, J. I. M. A., 9 (1972), pp. 91–108.

[86] ——, Safeguarded steplength algorithms for optimization using descent methods, Tech. Report NAC 37, National Physical Laboratory Report, Teddington, England, 1974.

[87] P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization, Academic Press, London, 1981.

[88] A. A. Goldstein, Constructive Real Analysis, Harper and Row, New York, 1967.

[89] G. H. Golub and C. G. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, 1983.

[90] A. Griewank, Analysis and modification of Newton's method at singularities, PhD thesis, Australian National University, 1981.

[91] ——, Rates of convergence for secant methods on nonlinear problems in Hilbert space, in Numerical Analysis, Proceedings Guanajuato, Mexico 1984, Lecture Notes in Math., No. 1230, J. P. Hennart, ed., Springer-Verlag, Heidelberg, 1986, pp. 138–157.

[92] ——, The solution of boundary value problems by Broyden based secant methods, in Computational Techniques and Applications: CTAC 85, Proceedings of CTAC, Melbourne, August 1985, J. Noye and R. May, eds., North Holland, Amsterdam, 1986, pp. 309–321.

[93] ——, On the iterative solution of differential and integral equations using secant updating techniques, in The State of the Art in Numerical Analysis, A. Iserles and M. J. D. Powell, eds., Clarendon Press, Oxford, 1987, pp. 299–324.

[94] A. Griewank and G. F. Corliss, eds., Automatic Differentiation of Algorithms: Theory, Implementation, and Application, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1991.

[95] A. Griewank and P. L. Toint, Local convergence analysis for partitioned quasi-Newton updates, Numer. Math., 39 (1982), pp. 429–448.

[96] ——, Partitioned variable metric methods for large sparse optimization problems, Numer. Math., 39 (1982), pp. 119–137.

[97] W. A. Gruver and E. Sachs, Algorithmic Methods in Optimal Control, Pitman, London, 1980.


[98] M. H. Gutknecht, Variants of BICGSTAB for matrices with complex spectrum, SIAM J. Sci. Comput., 14 (1993), pp. 1020–1033.

[99] W. Hackbusch, Multi-Grid Methods and Applications, vol. 4 of Springer Series in Computational Mathematics, Springer-Verlag, New York, 1985.

[100] ——, Multigrid methods of the second kind, in Multigrid Methods for Integral and Differential Equations, Oxford University Press, Oxford, 1985.

[101] W. E. Hart and S. O. W. Soul, Quasi-Newton methods for discretized nonlinear boundary problems, Journal of the Institute of Applied Mathematics, 11 (1973), pp. 351–359.

[102] H. V. Henderson and S. R. Searle, On deriving the inverse of a sum of matrices, SIAM Rev., 23 (1981), pp. 53–60.

[103] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. of Res. Nat. Bureau Standards, 49 (1952), pp. 409–436.

[104] D. M. Hwang and C. T. Kelley, Convergence of Broyden's method in Banach spaces, SIAM J. Optim., 2 (1992), pp. 505–532.

[105] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley, New York, 1966.

[106] L. Kantorovich and G. Akilov, Functional Analysis, 2nd ed., Pergamon Press, New York, 1982.

[107] L. V. Kantorovich, Functional analysis and applied mathematics, Uspehi Mat. Nauk., 3 (1948), pp. 89–185. Translation by C. Benster as Nat. Bur. Standards Report 1509, Washington, D. C., 1952.

[108] H. B. Keller, Newton's method under mild differentiability conditions, J. Comput. Sys. Sci., 4 (1970), pp. 15–28.

[109] ——, Lectures on Numerical Methods in Bifurcation Theory, Tata Institute of Fundamental Research, Lectures on Mathematics and Physics, Springer-Verlag, New York, 1987.

[110] C. T. Kelley, Solution of the Chandrasekhar H-equation by Newton's method, J. Math. Phys., 21 (1980), pp. 1625–1628.

[111] ——, A fast multilevel algorithm for integral equations, SIAM J. Numer. Anal., 32 (1995), pp. 501–513.

[112] C. T. Kelley and E. W. Sachs, A quasi-Newton method for elliptic boundary value problems, SIAM J. Numer. Anal., 24 (1987), pp. 516–531.

[113] ——, A pointwise quasi-Newton method for unconstrained optimal control problems, Numer. Math., 55 (1989), pp. 159–176.

[114] ——, Fast algorithms for compact fixed point problems with inexact function evaluations, SIAM J. Sci. Statist. Comput., 12 (1991), pp. 725–742.

[115] ——, A new proof of superlinear convergence for Broyden's method in Hilbert space, SIAM J. Optim., 1 (1991), pp. 146–150.

[116] ——, Pointwise Broyden methods, SIAM J. Optim., 3 (1993), pp. 423–441.

[117] ——, Multilevel algorithms for constrained compact fixed point problems, SIAM J. Sci. Comput., 15 (1994), pp. 645–667.

[118] C. T. Kelley and R. Suresh, A new acceleration method for Newton's method at singular points, SIAM J. Numer. Anal., 20 (1983), pp. 1001–1009.

[119] C. T. Kelley and Z. Q. Xue, Inexact Newton methods for singular problems, Optimization Methods and Software, 2 (1993), pp. 249–267.

[120] ——, GMRES and integral operators, SIAM J. Sci. Comput., 17 (1996), pp. 217–226.

[121] T. Kerkhoven and Y. Saad, On acceleration methods for coupled nonlinear elliptic systems, Numerische Mathematik, 60 (1992), pp. 525–548.


[122] C. Lanczos, Solution of linear equations by minimized iterations, J. Res. Natl. Bur. Stand., 49 (1952), pp. 33–53.
[123] T. A. Manteuffel, Adaptive procedure for estimating parameters for the nonsymmetric Tchebychev iteration, Numer. Math., 31 (1978), pp. 183–208.
[124] T. A. Manteuffel and S. Parter, Preconditioning and boundary conditions, SIAM J. Numer. Anal., 27 (1990), pp. 656–694.
[125] E. S. Marwil, Convergence results for Schubert's method for solving sparse nonlinear equations, SIAM J. Numer. Anal., (1979), pp. 588–604.
[126] S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1989.
[127] J. A. Meijerink and H. A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comput., 31 (1977), pp. 148–162.
[128] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, forthcoming.
[129] J. J. Moré, Recent developments in algorithms and software for trust region methods, in Mathematical Programming: The State of the Art, A. Bachem, M. Grötschel, and B. Korte, eds., Springer-Verlag, Berlin, 1983, pp. 258–287.
[130] J. J. Moré and J. A. Trangenstein, On the global convergence of Broyden's method, Math. Comput., 30 (1976), pp. 523–540.
[131] T. E. Mott, Newton's method and multiple roots, Amer. Math. Monthly, 64 (1957), pp. 635–638.
[132] T. W. Mullikin, Some probability distributions for neutron transport in a half space, J. Appl. Probab., 5 (1968), pp. 357–374.
[133] W. Murray and M. L. Overton, Steplength algorithms for minimizing a class of nondifferentiable functions, Computing, 23 (1979), pp. 309–331.
[134] N. M. Nachtigal, S. C. Reddy, and L. N. Trefethen, How fast are nonsymmetric matrix iterations?, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 778–795.
[135] N. M. Nachtigal, L. Reichel, and L. N. Trefethen, A hybrid GMRES algorithm for nonsymmetric linear systems, SIAM J. Matrix Anal. Appl., 13 (1992).
[136] S. G. Nash, Preconditioning of truncated Newton methods, SIAM J. Sci. Statist. Comput., 6 (1985), pp. 599–616.
[137] ———, Who was Raphson? (an answer). Electronic Posting to NA-Digest, v92n23, 1992.
[138] J. L. Nazareth, Conjugate gradient methods less dependent on conjugacy, SIAM Rev., 28 (1986), pp. 501–512.
[139] O. Nevanlinna, Convergence of Iterations for Linear Equations, Birkhäuser, Basel, 1993.
[140] I. Newton, The Mathematical Papers of Isaac Newton (7 volumes), D. T. Whiteside, ed., Cambridge University Press, Cambridge, 1967–1976.
[141] B. Noble, Applied Linear Algebra, Prentice Hall, Englewood Cliffs, NJ, 1969.
[142] J. Nocedal, Theory of algorithms for unconstrained optimization, Acta Numerica, 1 (1991), pp. 199–242.
[143] D. P. O'Leary, Why Broyden's nonsymmetric method terminates on linear equations, SIAM J. Optim., 4 (1995), pp. 231–235.
[144] J. M. Ortega, Numerical Analysis: A Second Course, Classics in Appl. Math., No. 3, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990.
[145] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[146] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, 1960.

[147] B. N. Parlett, The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliffs, NJ, 1980.
[148] B. N. Parlett, D. R. Taylor, and Z. A. Liu, A look-ahead Lanczos algorithm for unsymmetric matrices, Math. Comput., 44 (1985), pp. 105–124.
[149] D. W. Peaceman and H. H. Rachford, The numerical solution of parabolic and elliptic differential equations, J. Soc. Indus. Appl. Math., 11 (1955), pp. 28–41.
[150] G. Peters and J. Wilkinson, Inverse iteration, ill-conditioned equations and Newton's method, SIAM Rev., 29 (1979), pp. 339–360.
[151] L. R. Petzold, A description of DASSL: a differential/algebraic system solver, in Scientific Computing, R. S. Stepleman et al., eds., North Holland, Amsterdam, 1983, pp. 65–68.
[152] E. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. de Math. ser. 4, 6 (1890), pp. 145–210.
[153] M. J. D. Powell, A hybrid method for nonlinear equations, in Numerical Methods for Nonlinear Algebraic Equations, Gordon and Breach, New York, 1970, pp. 87–114.
[154] ———, Convergence properties of a class of minimization algorithms, in Nonlinear Programming 2, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, eds., Academic Press, New York, 1975, pp. 1–27.
[155] L. B. Rall, Convergence of the Newton process to multiple solutions, Numer. Math., 9 (1961), pp. 23–37.
[156] J. Raphson, Analysis aequationum universalis seu ad aequationes algebraicas resolvendas methodus generalis, et expedita, ex nova infinitarum serierum doctrina, deducta ac demonstrata. Original in British Library, London, 1690.
[157] G. W. Reddien, On Newton's method for singular problems, SIAM J. Numer. Anal., 15 (1978), pp. 993–996.
[158] J. K. Reid, Least squares solution of sparse systems of nonlinear equations by a modified Marquardt algorithm, in Proc. NATO Conf. at Cambridge, July 1972, North Holland, Amsterdam, 1973, pp. 437–445.
[159] W. C. Rheinboldt, Numerical Analysis of Parametrized Nonlinear Equations, John Wiley, New York, 1986.
[160] J. R. Rice, Experiments on Gram–Schmidt orthogonalization, Math. Comput., 20 (1966), pp. 325–328.
[161] T. J. Rivlin, The Chebyshev Polynomials, John Wiley, New York, 1974.
[162] H. L. Royden, Real Analysis, 2nd ed., Macmillan, New York, 1968.
[163] Y. Saad, Practical use of polynomial preconditionings for the conjugate gradient method, SIAM J. Sci. Statist. Comput., 6 (1985), pp. 865–881.
[164] ———, Least squares polynomials in the complex plane and their use for solving nonsymmetric linear systems, SIAM J. Numer. Anal., 24 (1987), pp. 155–169.
[165] ———, ILUT: A dual threshold incomplete LU factorization, Tech. Report 92-38, Computer Science Department, University of Minnesota, 1992.
[166] ———, A flexible inner-outer preconditioned GMRES algorithm, SIAM J. Sci. Comput., (1993), pp. 461–469.
[167] Y. Saad and M. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[168] E. Sachs, Broyden's method in Hilbert space, Math. Programming, 35 (1986), pp. 71–82.
[169] P. E. Saylor and D. C. Smolarski, Implementation of an adaptive algorithm for Richardson's method, Linear Algebra Appl., 154/156 (1991), pp. 615–646.

[170] R. B. Schnabel and P. D. Frank, Tensor methods for nonlinear equations, SIAM J. Numer. Anal., 21 (1984), pp. 815–843.
[171] E. Schröder, Über unendlich viele Algorithmen zur Auflösung der Gleichungen, Math. Ann., 2 (1870), pp. 317–365.
[172] L. K. Schubert, Modification of a quasi-Newton method for nonlinear equations with sparse Jacobian, Math. Comput., 24 (1970), pp. 27–30.
[173] G. A. Schultz, R. B. Schnabel, and R. H. Byrd, A family of trust-region-based algorithms for unconstrained minimization with strong global convergence properties, SIAM J. Numer. Anal., 22 (1985), pp. 47–67.
[174] V. E. Shamanskii, A modification of Newton's method, Ukran. Mat. Zh., 19 (1967), pp. 133–138. (In Russian.)
[175] A. H. Sherman, On Newton-iterative methods for the solution of systems of nonlinear equations, SIAM J. Numer. Anal., 14 (1978), pp. 755–774.
[176] J. Sherman and W. J. Morrison, Adjustment of an inverse matrix corresponding to changes in the elements of a given column or a given row of the original matrix (abstract), Ann. Math. Statist., 20 (1949), p. 621.
[177] ———, Adjustment of an inverse matrix corresponding to a change in one element of a given matrix, Ann. Math. Statist., 21 (1950), pp. 124–127.
[178] K. Sigmon, MATLAB Primer, Fourth Edition, CRC Press, Boca Raton, FL, 1994.
[179] D. C. Smolarski and P. E. Saylor, An optimum iterative method for solving any linear system with a square matrix, BIT, 28 (1988), pp. 163–178.
[180] P. Sonneveld, CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 10 (1989), pp. 36–52.
[181] D. C. Sorensen, Newton's method with a model trust region modification, SIAM J. Numer. Anal., 19 (1982), pp. 409–426.
[182] G. Starke, Alternating direction preconditioning for nonsymmetric systems of linear equations, SIAM J. Sci. Comput., 15 (1994), pp. 369–385.
[183] G. Starke and R. S. Varga, A hybrid Arnoldi-Faber iterative method for nonsymmetric systems of linear equations, Numer. Math., 64 (1993), pp. 213–239.
[184] G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.
[185] E. L. Stiefel, Kernel polynomials in linear algebra and their numerical applications, U.S. National Bureau of Standards, Applied Mathematics Series, 49 (1958), pp. 1–22.
[186] P. N. Swarztrauber, The methods of cyclic reduction, Fourier analysis and the FACR algorithm for the discrete solution of Poisson's equation on a rectangle, SIAM Rev., 19 (1977), pp. 490–501.
[187] ———, Approximate cyclic reduction for solving Poisson's equation, SIAM J. Sci. Statist. Comput., 8 (1987), pp. 199–209.
[188] P. N. Swarztrauber and R. A. Sweet, Algorithm 541: Efficient FORTRAN subprograms for the solution of elliptic partial differential equations, ACM Trans. Math. Software, 5 (1979), pp. 352–364.
[189] P. L. Toint, On sparse and symmetric matrix updating subject to a linear equation, Math. Comput., 31 (1977), pp. 954–961.
[190] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, NJ, 1964.
[191] H. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant to Bi-CG for the solution of nonsymmetric systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 631–644.

[192] H. A. van der Vorst and C. Vuik, The superlinear convergence behaviour of GMRES, J. Comput. Appl. Math., 48 (1993), pp. 327–341.
[193] R. S. Varga, Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, NJ, 1962.
[194] E. L. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice Hall, Englewood Cliffs, NJ, 1966.
[195] H. F. Walker, Implementation of the GMRES method using Householder transformations, SIAM J. Sci. Statist. Comput., 9 (1989), pp. 815–825.
[196] ———, Residual smoothing and peak/plateau behavior in Krylov subspace methods, Applied Numer. Math., 19 (1995), pp. 279–286.
[197] H. F. Walker and L. Zhou, A simpler GMRES, J. Numer. Lin. Alg. Appl., 6 (1994), pp. 571–581.
[198] L. T. Watson, S. C. Billups, and A. P. Morgan, Algorithm 652: HOMPACK: A suite of codes for globally convergent homotopy algorithms, ACM Trans. Math. Software, 13 (1987), pp. 281–310.
[199] M. A. Woodbury, Inverting modified matrices, Memorandum Report 42, Statistical Research Group, Princeton, NJ, 1950.
[200] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[201] T. J. Ypma, The effect of rounding errors on Newton-like methods, IMA J. Numer. Anal., 3 (1983), pp. 109–118.
[202] L. Zhou and H. F. Walker, Residual smoothing techniques for iterative methods, SIAM J. Sci. Comput., 15 (1994), pp. 297–312.

Index

A-conjugacy, 20
A-norm, 12
Approximate inverse, 6, 77
Armijo rule, 137
Arnoldi process, 37

Backtracking, 136
Bad Broyden method, 132
Banach Lemma, 6
Bi-CG, 47
  algorithm, 47
Bi-CGSTAB, 50
  algorithm, 50
  finite difference, 110
Bi-conjugacy, 47
Bi-Conjugate Gradient, 47
  algorithm, 47
Bi-orthogonality, 47
Bounded deterioration, 118, 121
Breakdown, 47
Broyden sequence, 120
Broyden's method, 113
  convergence
    linear equations, 118
    nonlinear equations, 120
  implementation, 123
  limited memory, 123
  restarting, 123
  sparse, 133
Brsol
  algorithm, 126
Brsola
  algorithm, 145

Cauchy sequence, 67
CGNE, 25
CGNR, 25
CGS, 48
  algorithm, 49
Chandrasekhar H-equation, 87
  solution
    Broyden's method, 128
    Newton's method, 87
    Newton-GMRES, 107
Characteristic polynomial, 34
Chord method, 74
  algorithm, 74
  convergence, 76
Condition number, 3
Conjugate Gradient
  algorithm, 22
  finite termination, 14
  minimization property, 12
  use of residual polynomials, 14
Conjugate Gradient Squared, 48
  algorithm, 49
Conjugate transpose, 34
Contraction mapping, 67
Contraction Mapping Theorem, 67
Convection-Diffusion Equation
  Broyden's method, 130
  Newton–Armijo, 146
  Newton-GMRES, 108
  solution
    Broyden's method, 130
Convergence theorem
  Broyden's method
    linear equations, 119
    nonlinear equations, 123
  chord method, 76, 77
  contraction mapping, 67
  finite difference Newton, 81
  inexact Newton's method, 96, 99
  Newton's method, 71
  Newton–Armijo, 141
  Newton-GMRES, 103
  secant method, 82
  Shamanskii method, 79

DASPK, 110
Dennis–Moré Condition, 114
  superlinear convergence, 115
Diagonalizable matrix, 34
Diagonalizing transformation, 34
Difference approximation, 79
  choice of h, 80
  directional derivative, 80
  Jacobian, 80
  nonlinearity, 80

Elementary Jordan block, 60

Fixed point, 66
Fixed-point iteration, 66
Forcing term, 95
  choice of, 104

Gauss–Seidel
  algorithm, 9
Givens rotation, 43
Globally convergent algorithm, 135
GMRES
  algorithm, 40, 45
  diagonalizable matrices, 34
  finite differences, 101
  finite termination, 34
  GMRES(m), 39, 46
    algorithm, 46
  loss of orthogonality, 41
  minimization property, 33
  restarts, 39, 46
  use of residual polynomials, 33
Gram–Schmidt process, 37
  loss of orthogonality, 40, 41
  modified, 40

H-equation, 87
  solution
    Broyden's method, 128
    Newton's method, 87
    Newton-GMRES, 107
Hölder Continuity, 91
High order methods, 78
Hybrid algorithms, 36

Induced matrix norm
  definition, 3
Inexact Newton method, 95
Inner iteration, 100
Inverse power method, 94
Inverse secant equation, 132
Inverse Tangent
  Newton–Armijo, 146
Iteration matrix, 5
Iteration statistics, 91

Jacobi
  algorithm, 8
  convergence result, 8
  preconditioning, 9
Jacobi preconditioning, 24
Jordan block, 60

Kantorovich Theorem, 83
Krylov subspace, 11

Left preconditioning, 36
Limited memory, 123
Line Search
  parabolic
    three-point, 143
    two-point, 142
  polynomial, 142
Line search, 135, 136
Linear PDE
  Bi-CGSTAB, 57
  Broyden's method, 127
  CGNR, 55
  GMRES, 54
  solution
    Broyden's method, 127
  TFQMR, 58
Lipschitz continuous
  definition, 67
Local improvement, 75
  inexact, 100

Matrix norm, 3
Matrix splitting, 7
Modified Bratu Problem, 132

Newton direction, 136
Newton's method, 71
  algorithm, 74
  convergence rate, 71
  finite differences, 81
    stagnation, 81
  inexact, 95
  monotone convergence, 92
  termination, 72
Newton-GMRES, 101
  convergence rates, 103
  finite differences, 101
  safeguarding, 105
Newton-iterative method, 100
Normal matrix, 34
Nsol
  algorithm, 86
Nsola
  algorithm, 138
Nsolgm
  algorithm, 105

Outer iteration, 100
Over solving, 104

Picard iteration, 66
Poisson solver, 24, 27
  preconditioner
    for CG, 24, 27
    for GMRES, 36, 54
Polynomial preconditioning, 24, 36
  Richardson iteration, 7
Positive definite matrix, 11
Preconditioned CG
  algorithm, 23
Preconditioning
  conjugate gradient, 19
  GMRES, 36
  incomplete
    factorization, 24, 36
  Jacobi, 24
  Poisson solver, 27, 36, 54
  polynomial, 24, 36
  Richardson iteration, 6

Quasi-minimization, 52
Quasi-Newton methods, 113
Quasi-residual, 51
Quasi-residual norm, 52

Relaxation parameter, 9
Reorthogonalization, 41
Residual
  linear, 11
    as error estimate, 4
  nonlinear
    as error estimate, 72
  relative nonlinear, 72
Residual polynomials
  application to CG, 14
  application to CGNR, 25
  application to GMRES, 33
  application to TFQMR, 52
  definition, 14
Richardson iteration, 5
  nonlinear, 66
Right preconditioning, 36

Schubert algorithm, 133
Search direction, 136
Secant equation, 113
Secant method, 82
  convergence, 82
    q-order, 91
    r-order, 91
Secant update, 113
Shamanskii method, 78
  algorithm, 78
  convergence, 79
  finite differences, 91
Sherman–Morrison
  formula, 124, 132
Sherman–Morrison–Woodbury formula, 132
Simple decrease, 136
SOR
  algorithm, 9
SPD matrix, 12
Spectral radius, 7
Stagnation, 94
  Newton's method, 75
Stationary iterative methods, 5
  as preconditioners, 24
Successive substitution, 66
Sufficient decrease, 137
Symmetric Gauss–Seidel
  algorithm, 9
  preconditioning, 9
Symmetric matrix, 11
Symmetric positive definite matrix, 12

Termination
  CG, 16
  GMRES, 35, 38
  linear equations, 4
  nonlinear equations, 72
  TFQMR, 52
TFQMR, 51
  algorithm, 53
  finite difference, 110
  weights, 52

Weighted norm, 98