Iterative methods for optimization

This book gives an introduction to optimization methods for unconstrained and bound-constrained minimization problems. The style of the book is probably best described by the following quote from its preface: "… we treat a small number of methods in depth, giving less detailed description of only a few […]. We aim for clarity and brevity rather than complete generality and confine our scope to algorithms that are easy to implement (by the reader!) and understand."

The book is partitioned into two parts. The first part, occupying approximately 100 pages, is devoted to the optimization of smooth functions. The methods studied in this part rely on the availability and accuracy of first-order, and sometimes also second-order, derivatives of the objective function. The first part contains five chapters. Chapter 1 provides basic concepts; it also introduces a parameter identification problem and a discretized optimal control problem, both of which are used to demonstrate all methods discussed in the first part. Chapter 2 studies the local convergence of Newton's method, inexact Newton methods, and the Gauss-Newton method for the solution of nonlinear least squares problems; both overdetermined and underdetermined problems are considered. Chapter 3 is devoted to line-search and trust-region methods, which are used to globalize convergence, i.e., to remove the restriction that the starting point of the optimization iteration be sufficiently close to a solution.

The BFGS method is studied in Chapter 4: a local convergence analysis is provided, implementation details are discussed, and other quasi-Newton methods are sketched. The last chapter of the first part, Chapter 5, studies projection methods for the solution of bound-constrained problems.
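To convey the flavor of the methods treated in the first part, the following is a minimal Python sketch (not the book's MATLAB code) of Newton's method globalized by an Armijo backtracking line search, in the spirit of Chapters 2 and 3; the test function, derivatives, and parameter values are the reviewer's illustrative choices.

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, tol=1e-8, max_iter=100,
                  alpha=1e-4, beta=0.5):
    """Newton's method globalized by an Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Newton step; fall back to steepest descent if it is unusable
        try:
            d = np.linalg.solve(hess(x), -g)
            if g @ d >= 0:          # not a descent direction
                d = -g
        except np.linalg.LinAlgError:
            d = -g
        # Backtrack until the Armijo sufficient-decrease condition holds
        t = 1.0
        while f(x + t * d) > f(x) + alpha * t * (g @ d):
            t *= beta
        x = x + t * d
    return x

# Example: minimize the Rosenbrock function from the classical start (-1.2, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
hess = lambda x: np.array([
    [2 - 400 * x[1] + 1200 * x[0]**2, -400 * x[0]],
    [-400 * x[0], 200.0],
])
x_star = newton_armijo(f, grad, hess, [-1.2, 1.0])
```

Far from the solution the backtracking safeguards progress; near the solution the full Newton step is accepted and the fast local convergence analyzed in Chapter 2 takes over.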
All chapters conclude with a demonstration of the methods on the parameter identification problem and the discretized optimal control problem introduced in Chapter 1, and with a set of exercises.

The second part of the book, approximately 50 pages long, deals with the optimization of noisy functions. Such optimization problems arise, e.g., when the evaluation of the objective function involves computer simulations. In such cases the noise often introduces artificial minimizers, and gradient information, even if available, cannot be expected to be reliable. This part contains three chapters. Chapter 6 provides a discussion of noisy functions, basic concepts, and three simple examples that are later used to demonstrate the behavior of the optimization algorithms. Chapter 7 introduces implicit filtering, a technique due to the author and his group; implicit filtering methods use finite difference approximations of the gradient whose difference increment is adjusted to the noise level in the function. Direct search algorithms, including the Nelder-Mead, multidirectional search, and Hooke-Jeeves algorithms, are discussed in Chapter 8. Again, the latter two chapters conclude with a numerical demonstration of the methods discussed and with a set of exercises.

The treatment of optimization methods for both smooth and noisy functions is a unique feature of this book. MATLAB implementations of all algorithms discussed in the book are easily accessible from the author's or the publisher's web page.
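The idea behind implicit filtering can be sketched as follows. This is the reviewer's illustrative Python simplification, not the author's implementation: finite-difference gradients drive a line search, and the difference increment h is halved whenever the search stalls, so that the stencil follows the smooth trend of the function rather than the noise. The noisy test function and all parameter values are assumptions for the example.

```python
import numpy as np

def fd_grad(f, x, h):
    """Central finite-difference gradient with difference increment h."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def implicit_filtering(f, x0, h0=0.5, h_min=1e-6, max_iter=200):
    """Descent on finite-difference gradients; h shrinks on stencil failure."""
    x = np.asarray(x0, dtype=float)
    h = h0
    while h > h_min:
        for _ in range(max_iter):
            g = fd_grad(f, x, h)
            # Simple backtracking line search along the negative FD gradient
            t, success = 1.0, False
            while t > 1e-10:
                if f(x - t * g) < f(x) - 1e-4 * t * (g @ g):
                    x, success = x - t * g, True
                    break
                t /= 2
            if not success:       # no decrease found: shrink the stencil
                break
        h /= 2
    return x

# A smooth quadratic plus a high-frequency perturbation whose artificial
# local minimizers would trap a naive small-step finite-difference method.
noisy = lambda x: np.sum((x - 1.0)**2) + 1e-3 * np.sin(100 * x[0]) * np.sin(100 * x[1])
x_star = implicit_filtering(noisy, np.array([3.0, -2.0]))
```

With a large increment the central differences average out the oscillation and recover the gradient of the underlying quadratic, so the iterate lands near the true minimizer at (1, 1) instead of in one of the many artificial minimizers.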

Referenced in 554 zbMATH articles.
