Arbor, MI, 1975.

[145] R. Hooke and T. A. Jeeves, "Direct search" solution of numerical and statistical problems, J. Assoc. Comput. Mach., 8 (1961), pp. 212–229.

[146] R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization, Kluwer Academic Publishers, Dordrecht, the Netherlands, 1995.

[147] W. Huyer and A. Neumaier, Global Optimization by Multilevel Coordinate Search, Institut für Mathematik, Universität Wien, 1997, preprint.

[148] D. M. Hwang and C. T. Kelley, Convergence of Broyden's method in Banach spaces, SIAM J. Optim., 2 (1992), pp. 505–532.

[149] G. Jasen, Investment dartboard: Pros and dart throwers both lose money, Your Money Matters, Wall Street Journal, May 7, 1997.

[150] D. R. Jones, C. C. Perttunen, and B. E. Stuckman, Lipschitzian optimization without the Lipschitz constant, J. Optim. Theory Appl., 79 (1993), pp. 157–181.

[151] L. Kantorovich and G. Akilov, Functional Analysis, 2nd ed., Pergamon Press, New York, 1982.

[152] R. B. Kearfott, Rigorous Global Search: Continuous Problems, Kluwer, Dordrecht, the Netherlands, 1996.

[153] C. T. Kelley, Identification of the support of nonsmoothness, in Large Scale Optimization: State of the Art, W. W. Hager, D. W. Hearn, and P. Pardalos, eds., Kluwer Academic Publishers B.V., Boston, 1994, pp. 192–205.

[154] ———, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995.

[155] ———, Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition, SIAM J. Optim., 10 (1999), pp. 43–55.

[156] C. T. Kelley, C. T. Miller, and M. D. Tocci, Termination of Newton/chord iterations and the method of lines, SIAM J. Sci. Comput., 19 (1998), pp. 280–290.

Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR18.html.

Copyright ©1999 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.

170 BIBLIOGRAPHY

[157] C. T. Kelley and E. W. Sachs, Applications of quasi-Newton methods to pseudoparabolic control problems, in Optimal Control of Partial Differential Equations II: Theory and Applications, Birkhäuser, Basel, 1987.

[158] ———, Quasi-Newton methods and unconstrained optimal control problems, SIAM J. Control Optim., 25 (1987), pp. 1503–1516.

[159] ———, A pointwise quasi-Newton method for unconstrained optimal control problems, Numer. Math., 55 (1989), pp. 159–176.

[160] ———, Pointwise Broyden methods, SIAM J. Optim., 3 (1993), pp. 423–441.

[161] ———, Multilevel algorithms for constrained compact fixed point problems, SIAM J. Sci. Comput., 15 (1994), pp. 645–667.

[162] ———, Local convergence of the symmetric rank-one iteration, Comput. Optim. Appl., 9 (1998), pp. 43–63.

[163] ———, A trust region method for parabolic boundary control problems, SIAM J. Optim., 9 (1999), pp. 1064–1081.

[164] C. T. Kelley, E. W. Sachs, and B. Watson, A pointwise quasi-Newton method for unconstrained optimal control problems, II, J. Optim. Theory Appl., 71 (1991), pp. 535–547.

[165] H. F. Khalfan, R. H. Byrd, and R. B. Schnabel, A theoretical and experimental study of the symmetric rank-one update, SIAM J. Optim., 3 (1993), pp. 1–24.

[166] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Optimization by simulated annealing, Science, 220 (1983), pp. 671–680.

[167] J. R. Koehler and A. B. Owen, Computer experiments, in Handbook of Statistics, Vol. 13, S. Ghosh and C. R. Rao, eds., Elsevier, New York, 1996, pp. 261–308.

[168] J. Kostrowicki and L. Piela, Diffusion equation method of global minimization: Performance for standard test functions, J. Optim. Theory Appl., 71 (1991), pp. 269–284.

[169] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, Convergence properties of the Nelder-Mead simplex algorithm in low dimensions, SIAM J. Optim., 9 (1998), pp. 112–147.

[170] E. B. Lee and L. Markus, Foundations of Optimal Control Theory, John Wiley, New York, London, Sydney, 1967.

[171] C. Lemaréchal, A view of line searches, in Optimization and Optimal Control, A. Auslender, W. Oettli, and J. Stoer, eds., Lecture Notes in Control and Information Sciences 30, Springer-Verlag, Berlin, 1981, pp. 59–78.

[172] K. Levenberg, A method for the solution of certain nonlinear problems in least squares, Quart. Appl. Math., 2 (1944), pp. 164–168.

[173] R. M. Lewis and V. Torczon, Rank Ordering and Positive Bases in Pattern Search Algorithms, Tech. Rep. 96-71, Institute for Computer Applications in Science and Engineering, December 1996.

[174] ———, Pattern search algorithms for bound constrained minimization, SIAM J. Optim., 9 (1999), pp. 1082–1099.


[175] ———, Pattern search algorithms for linearly constrained minimization, SIAM J. Optim., 10 (2000), pp. 917–941.

[176] D. C. Liu and J. Nocedal, On the limited memory BFGS method for large-scale optimization, Math. Programming, 43 (1989), pp. 503–528.

[177] R. B. Long and W. C. Thacker, Data assimilation into a numerical equatorial ocean model, part 2: Assimilation experiments, Dyn. Atmos. Oceans, 13 (1989), pp. 465–477.

[178] E. M. Lowndes, Vehicle Dynamics and Optimal Design, Ph.D. thesis, North Carolina State University, Raleigh, 1998.

[179] S. Lucidi and M. Sciandrone, On the global convergence of derivative free methods for unconstrained optimization, Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, 1997, preprint.

[180] D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, London, 1984.

[181] J. N. Lyness and C. B. Moler, Numerical differentiation of analytic functions, SIAM J. Numer. Anal., 4 (1967), pp. 202–210.

[182] C. D. Maranas and C. A. Floudas, A global optimization method for Weber's problem, in Large Scale Optimization: State of the Art, W. W. Hager, D. W. Hearn, and P. Pardalos, eds., Kluwer Academic Publishers B.V., Boston, 1994, pp. 259–293.

[183] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11 (1963), pp. 431–441.

[184] J. M. Martinez, Quasi-Newton methods for solving underdetermined nonlinear simultaneous equations, J. Comput. Appl. Math., 34 (1991), pp. 171–190.

[185] E. S. Marwil, Exploiting Sparsity in Newton-Type Methods, Ph.D. thesis, Cornell University, Ithaca, NY, 1978.

[186] H. Matthies and G. Strang, The solution of nonlinear finite element equations, Internat. J. Numer. Methods Engrg., 14 (1979), pp. 1613–1626.

[187] D. Q. Mayne and E. Polak, Nondifferential optimization via adaptive smoothing, J. Optim. Theory Appl., 43 (1984), pp. 601–613.

[188] K. I. M. McKinnon, Convergence of the Nelder-Mead simplex method to a nonstationary point, SIAM J. Optim., 9 (1998), pp. 148–158.

[189] E. H. Moore, General Analysis, Memoirs of the American Philosophical Society, 1935, Vol. I.

[190] J. J. Moré, The Levenberg-Marquardt algorithm: Implementation and theory, in Numerical Analysis, G. A. Watson, ed., Lecture Notes in Mathematics 630, Springer-Verlag, Berlin, 1977, pp. 105–116.

[191] ———, Trust regions and projected gradients, in System Modelling and Optimization, Lecture Notes in Control and Information Sciences 113, Springer-Verlag, Berlin, 1988, pp. 1–13.

[192] J. J. Moré and D. C. Sorensen, Computing a trust region step, SIAM J. Sci. Statist. Comput., 4 (1983), pp. 553–572.


[193] J. J. Moré and D. J. Thuente, Line search algorithms with guaranteed sufficient decrease, ACM Trans. Math. Software, 20 (1994), pp. 286–307.

[194] J. J. Moré and G. Toraldo, On the solution of large quadratic programming problems with bound constraints, SIAM J. Optim., 1 (1991), pp. 93–113.

[195] J. J. Moré and S. J. Wright, Optimization Software Guide, SIAM, Philadelphia, 1993.

[196] J. J. Moré and Z. Wu, Global continuation for distance geometry problems, SIAM J. Optim., 7 (1997), pp. 814–836.

[197] W. Murray and M. L. Overton, Steplength algorithms for minimizing a class of nondifferentiable functions, Computing, 23 (1979), pp. 309–331.

[198] S. G. Nash, Newton-type minimization via the Lanczos method, SIAM J. Numer. Anal., 21 (1984), pp. 770–788.

[199] ———, Preconditioning of truncated-Newton methods, SIAM J. Sci. Statist. Comput., 6 (1985), pp. 599–616.

[200] L. Nazareth, A relationship between the BFGS and conjugate gradient algorithm and its implications for new algorithms, SIAM J. Numer. Anal., 16 (1979), pp. 794–800.

[201] ———, Conjugate gradient methods less dependent on conjugacy, SIAM Rev., 28 (1986), pp. 501–511.

[202] ———, A view of conjugate gradient-related algorithms for nonlinear optimization, in Linear and Nonlinear Conjugate Gradient Methods, L. M. Adams and J. L. Nazareth, eds., SIAM, Philadelphia, 1996, pp. 149–164.

[203] J. A. Nelder. Private Communication, 1998.

[204] J. A. Nelder and R. Mead, A simplex method for function minimization, Comput. J., 7 (1965), pp. 308–313.

[205] A. Neumaier, On convergence and restart conditions for a nonlinear conjugate gradient method, Institut für Mathematik, Universität Wien, 1997, preprint.

[206] J. Nocedal, Updating quasi-Newton matrices with limited storage, Math. Comp., 35 (1980), pp. 773–782.

[207] ———, Theory of algorithms for unconstrained optimization, Acta Numerica, 1 (1991), pp. 199–242.

[208] ———, Conjugate gradient methods and nonlinear optimization, in Linear and Nonlinear Conjugate Gradient Methods, L. M. Adams and J. L. Nazareth, eds., SIAM, Philadelphia, 1996, pp. 9–23.

[209] J. Nocedal and Y. Yuan, Combining Trust Region and Line Search Techniques, Tech. Rep. OTC 98/04, Optimization Technology Center, Northwestern University, Chicago, IL, 1998.

[210] J. A. Northby, Structure and binding of Lennard-Jones clusters: 13 ≤ n ≤ 147, J. Chem. Phys., 87 (1987), pp. 6166–6177.

[211] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.


[212] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Phil. Soc., 51 (1955), pp. 406–413.

[213] L. R. Petzold, A description of DASSL: A differential/algebraic system solver, in Scientific Computing, R. S. Stepleman et al., eds., North-Holland, Amsterdam, 1983, pp. 65–68.

[214] S. A. Piyawskii, An algorithm for finding the absolute extremum of a function, USSR Comp. Math. and Math. Phys., 12 (1972), pp. 57–67.

[215] E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Française Informat. Recherche Opérationnelle, 3e Année, 16 (1969), pp. 35–43.

[216] B. T. Polyak, The conjugate gradient method in extremal problems, USSR Comp. Math. and Math. Phys., 9 (1969), pp. 94–112.

[217] M. J. D. Powell, A FORTRAN Subroutine for Unconstrained Minimization, Requiring First Derivatives of the Objective Function, Tech. Rep. AERE-R 6469, Mathematics Branch, A. E. R. E. Harwell, Berkshire, England, 1970.

[218] ———, A hybrid method for nonlinear equations, in Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Gordon and Breach, New York, 1970, pp. 87–114.

[219] ———, A new algorithm for unconstrained optimization, in Nonlinear Programming, J. B. Rosen, O. L. Mangasarian, and K. Ritter, eds., Academic Press, New York, 1970, pp. 31–65.

[220] ———, Convergence properties of a class of minimization algorithms, in Nonlinear Programming 2, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, eds., Academic Press, New York, 1975, pp. 1–27.

[221] ———, Some global convergence properties of a variable metric algorithm without exact line searches, in Nonlinear Programming, R. Cottle and C. Lemke, eds., American Mathematical Society, Providence, RI, 1976, pp. 53–72.

[222] ———, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics 1066, Springer-Verlag, Berlin, 1984, pp. 122–141.

[223] ———, On the global convergence of trust region algorithms for unconstrained minimization, Math. Programming, 29 (1984), pp. 297–303.

[224] ———, Convergence properties of algorithms for nonlinear optimization, SIAM Rev., 28 (1986), pp. 487–500.

[225] ———, How bad are the BFGS and DFP methods when the objective function is quadratic?, Math. Programming, 34 (1986), pp. 34–47.

[226] ———, Updating conjugate directions by the BFGS formula, Math. Programming, 38 (1987), pp. 29–46.

[227] ———, Direct search algorithms for optimization calculations, Acta Numerica, 7 (1998), pp. 287–336.

[228] K. Radhakrishnan and A. C. Hindmarsh, Description and Use of LSODE, the Livermore Solver for Ordinary Differential Equations, Tech. Rep. UCRL-ID-113855, Lawrence Livermore National Laboratory, Livermore, CA, December 1993.


[229] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, 1953.

[230] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, Design and analysis of computer experiments, Statist. Sci., 4 (1989), pp. 409–435.

[231] R. B. Schnabel and E. Eskow, A new modified Cholesky factorization, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 1136–1158.

[232] G. A. Schultz, R. B. Schnabel, and R. H. Byrd, A family of trust-region-based algorithms for unconstrained minimization with strong global convergence properties, SIAM J. Numer. Anal., 22 (1985), pp. 47–67.

[233] V. E. Shamanskii, A modification of Newton's method, Ukrain. Mat. Zh., 19 (1967), pp. 133–138 (in Russian).

[234] L. F. Shampine, Implementation of implicit formulas for the solution of ODEs, SIAM J. Sci. Statist. Comput., 1 (1980), pp. 103–118.

[235] ———, Numerical Solution of Ordinary Differential Equations, Chapman and Hall, New York, 1994.

[236] L. F. Shampine and M. W. Reichelt, The MATLAB ODE suite, SIAM J. Sci. Comput., 18 (1997), pp. 1–22.

[237] D. F. Shanno, Conditioning of quasi-Newton methods for function minimization, Math. Comp., 24 (1970), pp. 647–657.

[238] ———, On the variable metric methods for sparse Hessians, Math. Comp., 34 (1980), pp. 499–514.

[239] J. Sherman and W. J. Morrison, Adjustment of an inverse matrix corresponding to changes in the elements of a given column or a given row of the original matrix (abstract), Ann. Math. Statist., 20 (1949), p. 621.

[240] ———, Adjustment of an inverse matrix corresponding to a change in one element of a given matrix, Ann. Math. Statist., 21 (1950), pp. 124–127.

[241] B. O. Shubert, A sequential method seeking the global maximum of a function, SIAM J. Numer. Anal., 9 (1972), pp. 379–388.

[242] D. C. Sorensen, Newton's method with a model trust region modification, SIAM J. Numer. Anal., 19 (1982), pp. 409–426.

[243] ———, Minimization of a large-scale quadratic function subject to a spherical constraint, SIAM J. Optim., 7 (1997), pp. 141–161.

[244] W. Spendley, G. R. Hext, and F. R. Himsworth, Sequential application of simplex designs in optimisation and evolutionary operation, Technometrics, 4 (1962), pp. 441–461.

[245] W. Squire and G. Trapp, Using complex variables to estimate derivatives of real functions, SIAM Rev., 40 (1998), pp. 110–112.

[246] M. Srinivas and L. M. Patnaik, Genetic algorithms: A survey, Computer, 27 (1994), pp. 17–27.

[247] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983), pp. 626–637.


[248] C. P. Stephens and W. Baritompa, Global optimization requires global information, J. Optim. Theory Appl., 96 (1998), pp. 575–588.

[249] G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.

[250] D. Stoneking, G. Bilbro, R. Trew, P. Gilmore, and C. T. Kelley, Yield optimization using a GaAs process simulator coupled to a physical device model, IEEE Trans. Microwave Theory and Techniques, 40 (1992), pp. 1353–1363.

[251] D. E. Stoneking, G. L. Bilbro, R. J. Trew, P. Gilmore, and C. T. Kelley, Yield optimization using a GaAs process simulator coupled to a physical device model, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 374–383.

[252] K. Tanabe, Continuous Newton-Raphson method for solving an underdetermined system of nonlinear equations, J. Nonlinear Anal., Theory Methods Appl., 3 (1979), pp. 495–503.

[253] ———, Global analysis of continuous analogs of the Levenberg-Marquardt and Newton-Raphson methods for solving nonlinear equations, Ann. Inst. Statist. Math. B, 37 (1985), pp. 189–203.

[254] T. Tian and J. C. Dunn, On the gradient projection method for optimal control problems with nonnegative L2 inputs, SIAM J. Control Optim., 32 (1994), pp. 516–537.

[255] P. L. Toint, On sparse and symmetric matrix updating subject to a linear equation, Math. Comp., 31 (1977), pp. 954–961.

[256] ———, On the superlinear convergence of an algorithm for solving a sparse minimization problem, SIAM J. Numer. Anal., 16 (1979), pp. 1036–1045.

[257] ———, Towards an efficient sparsity exploiting Newton method for minimization, in Sparse Matrices and Their Uses, I. S. Duff, ed., Academic Press, New York, 1981, pp. 57–88.

[258] ———, On large scale nonlinear least squares calculations, SIAM J. Sci. Statist. Comput., 8 (1987), pp. 416–435.

[259] ———, Global convergence of a class of trust-region methods for nonconvex minimization in Hilbert space, IMA J. Numer. Anal., 8 (1988), pp. 231–252.

[260] V. Torczon. Private communication, 1997.

[261] ———, Multidirectional Search, Ph.D. thesis, Rice University, Houston, TX, 1989.

[262] ———, On the convergence of the multidirectional search algorithm, SIAM J. Optim., 1 (1991), pp. 123–145.

[263] ———, On the convergence of pattern search algorithms, SIAM J. Optim., 7 (1997), pp. 1–25.

[264] L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1996.

[265] P. van Laarhoven and E. Aarts, Simulated Annealing, Theory and Practice, Kluwer, Dordrecht, the Netherlands, 1987.

[266] L. N. Vicente, Trust-Region Interior-Point Algorithms for a Class of Nonlinear Programming Problems, Ph.D. thesis, Rice University, Houston, TX, 1996.


[267] F. H. Walters, L. R. Parker, S. L. Morgan, and S. N. Deming, Sequential Simplex Optimization, CRC Press, Boca Raton, FL, 1991.

[268] B. Watson, Quasi-Newton Methoden für Minimierungsprobleme mit strukturierter Hesse-Matrix, Diploma Thesis, Universität Trier, 1990.

[269] J. Werner, Über die globale Konvergenz von Variable-Metric Verfahren mit nichtexakter Schrittweitenbestimmung, Numer. Math., 31 (1978), pp. 321–334.

[270] T. A. Winslow, R. J. Trew, P. Gilmore, and C. T. Kelley, Doping profiles for optimum class B performance of GaAs MESFET amplifiers, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 188–197.

[271] ———, Simulated performance optimization of GaAs MESFET amplifiers, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 393–402.

[272] P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), pp. 226–235.

[273] ———, Convergence conditions for ascent methods II: Some corrections, SIAM Rev., 13 (1971), pp. 185–188.

[274] M. H. Wright, Direct search methods: Once scorned, now respectable, in Numerical Analysis 1995, Proc. 1995 Dundee Biennial Conference in Numerical Analysis, D. F. Griffiths and G. A. Watson, eds., Addison-Wesley Longman, Harlow, U.K., 1996, pp. 191–208.

[275] S. J. Wright, Compact storage of Broyden-class quasi-Newton matrices, Argonne National Laboratory, Argonne, IL, 1994, preprint.

[276] S. J. Wright and J. N. Holt, An inexact Levenberg-Marquardt method for large sparse nonlinear least squares, J. Austral. Math. Soc. Ser. B, 26 (1985), pp. 387–403.

[277] Z. Wu, The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation, SIAM J. Optim., 6 (1996), pp. 748–768.

[278] T. J. Ypma, The effect of rounding errors on Newton-like methods, IMA J. Numer. Anal., 3 (1983), pp. 109–118.

[279] S. K. Zavriev, On the global optimization properties of finite-difference local descent algorithms, J. Global Optim., 3 (1993), pp. 67–78.

[280] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, L-BFGS-B: FORTRAN subroutines for large-scale bound constrained optimization, ACM Trans. Math. Software, 23 (1997), pp. 550–560.


Index

ε-active set, 97
ε-inactive set, 97

Actual reduction, 40
    trust region, 51
Adjoint equation, 10
Adjoint variable, 10
Armijo rule, 39, 78

Backtracking, 39
BFGS
    algorithm, 80
    global convergence, 78
    implementation, 78
    limited memory, 79
    local convergence, 72
    modified update, 80
    restarted, 79
    update, 71
Bounded deterioration, 74
Breakdown
    conjugate gradient, 7

Cauchy decrease, 62
Cauchy point, 55
CG-Trust region optimizer, 65
CGTRUST
    algorithm, 65
Cholesky factorization, 7, 16
Chord method, 19
    algorithm, 19
Classical dogleg
    algorithm, 63
Conjugate gradient, 7, 28
    algorithm, 7
Constrained optimization problem, 3
Constraints, 87
    active, 87
    inactive, 87
Control variable, 10
Convergence
    r-type, 14
Convergence theorem
    chord method, 19
    Newton's method, 15
Convex quadratic, 6, 45
Coordinate descent algorithm, 146
Critical point, 5

DACE, 149
Default choice, 82
Dennis–Moré condition, 76
Descent direction, 40
DFP update, 81
Difference approximation
    Hessian, 20
    Hessian–vector product, 20
DIRECT
    method, 135
Direct search algorithms, 135
Direction of negative curvature, 9, 30
Dogleg, 58
    classical, 58

Feasible
    point, 87
    set, 87
Fletcher–Reeves formula, 49
Forcing term, 28
Forward difference CG
    algorithm, 30
Forward difference PCG
    algorithm, 31
Frobenius norm, 77
Fundamental theorem of calculus, 4

Gauss–Newton
    damped, 47
Gauss–Newton method, 23
Genetic algorithms, 112
Global convergence
    Armijo rule, 43
    BFGS–Armijo, 78
    dogleg trust region, 52
    gradient projection method, 95
    projected BFGS method, 104
    scaled projected gradient, 99
Global minimization problem, 3
Global minimizer, 3
Globally convergent algorithm, 39
Gradient of f, 4
Gradient projection, 91
    algorithm, 91

Hessian of f, 4
Hooke–Jeeves
    algorithm, 135, 145
    convergence, 148
    exploratory move, 145
    pattern move, 146

Implicit filtering
    restart, 127
    scales, 124
Indefinite matrix, 4
Indices
    active, 87
    inactive, 87
Inexact Newton
    method, 28
    step, 28
Initial
    guess, 4
    iterate, 4
Inner iteration, 28
Interval methods, 112

Krylov subspace, 8

Large residual problem, 22
Lennard–Jones function, 120
Levenberg–Marquardt
    method, 47
    parameter, 47, 56
Line search, 39
    actual reduction, 40
    exact, 41, 86
    predicted reduction, 40
Line search algorithm, 40
Linear model, 14, 40
Lipschitz
    constant, 14
    continuity, 14
Local convergence
    BFGS, 72
    chord method, 19
    implicit filtering, 124
    inexact Newton method, 29, 30
    Newton's method, 16
    projected BFGS method, 104
    projected Newton method, 101
    q-type, 13
Local improvement, 18
Local linear model
    for equations, 14
Local minimizer, 3
    nondegenerate, 90
Local quadratic model
    exact Hessians, 15

MDS, 135, 143
    convergence, 145
Measure of stationarity, 93
Minimizer
    global, 3
    local, 3
Minimum, 3
Minimum at all scales, 127
Minimum norm solution, 23, 25
Model Hessian, 40
Moore–Penrose inverse, 26
Multidirectional search, 135, 143
    convergence, 145

Necessary condition
    first-order, 5
Necessary conditions
    bound constraints
        one variable, 88
    first-order
        bound constraints, 88
    for optimality, 5
Negative curvature, 9, 30, 36
Nelder–Mead
    oriented restart, 141
    simplex method, 135
    stagnation
        detection, 141
Newtcg
    algorithm, 32
Newton step, 15
Newton's method, 14
    algorithm, 16
Newton-iterative methods, 28
Nondegenerate local minimizer, 90
Nonlinear conjugate gradient
    algorithm, 48
    convergence, 49
    Fletcher–Reeves, 49
    Polak–Ribière, 49
Nonlinear least squares, 11, 22
    Jacobian, 22
    overdetermined, 22
    residual, 22
    underdetermined, 22
Ntrust
    algorithm, 63

Objective function, 3
Optimality
    necessary conditions, 5
    sufficient conditions, 6
Optimization problem
    constrained, 3
    easiest, 12
    unconstrained, 3
Oriented lengths, 112
Outer iteration, 28

Parameter identification, 11
Polak–Ribière formula, 49
Positive definite matrix, 4
Positive semidefinite matrix, 4
Preconditioned CG
    algorithm, 8
Preconditioning
    conjugate gradient, 8
Predicted reduction
    line search, 40
    trust region, 51
Projected BFGS method, 104
Projected Newton method, 100
    algorithm, 100
Projection
    onto Ω, 89
PSB update, 81

Quadratic model, 6, 40
Quadratic objective functions, 6
Quasi-Newton methods, 71

Reduced Hessian
    definition, 89
Response surfaces, 148

Safeguarding, 44
Scaled gradient projection, 99
Scaling, 46
Scaling matrix, 99
Secant equation, 71
Secant method
    classical, 71
Sensitivity equations, 11, 12
Shamanskii method, 20
Sherman–Morrison
    formula, 81
Shubert algorithm, 149
Simple decrease, 40
Simplex
    condition, 112
    convex hull, 112
    definition, 112
    diameter, 112
    directions, 112
    gradient
        forward, 113
    nonsingular, 112
    reflected, 115
    vertex, 112
Simplex gradient
    central, 115
Simulated annealing, 112
Singular value decomposition, 25
Singular values, 26
Singular vectors, 26
Small residual problem, 22
spd, 4
Spendley–Hext–Himsworth algorithm, 159
SR1 update, 81
Stagnation, 19
    Nelder–Mead
        example, 139
Standard assumptions, 14
State equation, 10
State variable, 10
Stationary point, 5, 43, 100
    bound constraints, 88
    nondegenerate, 90
Steepest descent
    direction, 39
    method, 39
Stencil failure, 117, 123
Steplength, 39
Stiff initial value problems, 11
Strict complementarity, 90
Sufficient conditions
    for optimality, 6
Sufficient decrease, 39, 41
    gradient projection, 91
    projected Newton, 97
    simplex form, 138
Surrogate models, 148

Taylor's theorem, 5
Termination
    bound constrained problems, 91
    small function differences, 21
    small gradients, 21
    small steps, 21
    unconstrained problems, 21
Tosca, Floria, xiv
TR-CG
    algorithm, 64
Truncated Newton methods, 28
Trust region, 50
    actual reduction, 51
    algorithm
        adjustment, 51
        paradigm, 52
    dogleg, 58
        classical, 58
        convergence theorem, 52
    Levenberg–Marquardt
        algorithm, 58
        parameter, 57
    methods, 50
    Newton point, 58
    predicted reduction, 51
    problem, 50
    radius, 50
    trial solution, 50
    trial step, 50
    unidirectional, 54
Trust region CG
    algorithm, 64

Unconstrained optimization problem, 3

Weber's problem, 118, 127, 152
Wolfe conditions, 49
    strong, 49

Youngman
    Henny, 159

Zero residual problem, 22