DFO

DFO is a Fortran package for solving general nonlinear optimization problems with the following characteristics: they are relatively small scale (fewer than 100 variables), their objective function is relatively expensive to compute, and derivatives of the objective are not available and cannot be estimated efficiently. There may also be some noise in the function evaluation procedure. Such optimization problems arise, for example, in engineering design, where the objective function is evaluated by a simulation package treated as a black box.
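DFO's own Fortran calling interface is not reproduced here. As a minimal sketch of the problem class it targets, the example below minimizes a small, noisy, expensive black-box objective with SciPy's derivative-free COBYLA solver, which stands in for a derivative-free trust-region code such as DFO; the objective function, dimension, and noise level are illustrative assumptions, not part of the DFO package.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def black_box(x):
        # Stand-in for an expensive simulation run (illustrative only):
        # a smooth quadratic plus a small amount of evaluation noise,
        # the kind of objective derivative-free methods are meant for.
        smooth = np.sum((x - 1.0) ** 2)
        noise = 1e-4 * rng.standard_normal()
        return smooth + noise

    x0 = np.zeros(5)  # small-scale problem: far fewer than 100 variables

    # COBYLA uses only function values, never derivatives; it illustrates
    # the derivative-free setting but is not the DFO algorithm itself.
    result = minimize(black_box, x0, method="COBYLA",
                      options={"rhobeg": 0.5, "maxiter": 200})

    print(result.x, result.fun)

In this setting each call to black_box would typically dominate the run time, so the solver's goal is to reach a good point with as few evaluations as possible.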


References in zbMATH (referenced in 105 articles, 1 standard article)

Showing results 1 to 20 of 105, sorted by year (citations).


  1. Echebest, N.; Schuverdt, M.L.; Vignau, R.P.: An inexact restoration derivative-free filter method for nonlinear programming (2017)
  2. Fang, Xiaowei; Ni, Qin: A frame-based conjugate gradients direct search method with radial basis function interpolation model (2017)
  3. Tenne, Yoel: Machine-learning in optimization of expensive black-box functions (2017)
  4. Cauwet, Marie-Liesse; Liu, Jialin; Rozière, Baptiste; Teytaud, Olivier: Algorithm portfolios for noisy optimization (2016)
  5. Garmanjani, R.; Júdice, D.; Vicente, L.N.: Trust-region methods without using derivatives: worst case complexity and the nonsmooth case (2016)
  6. Lazar, Markus; Jarre, Florian: Calibration by optimization without using derivatives (2016)
  7. Tröltzsch, Anke: A sequential quadratic programming algorithm for equality-constrained optimization without derivatives (2016)
  8. Wang, Jueyu; Zhu, Detong: Conjugate gradient path method without line search technique for derivative-free unconstrained optimization (2016)
  9. Audet, Charles; Le Digabel, Sébastien; Peyrega, Mathilde: Linear equalities in blackbox optimization (2015)
  10. Ferreira, Priscila S.; Karas, Elizabeth W.; Sachine, Mael: A globally convergent trust-region algorithm for unconstrained derivative-free optimization (2015)
  11. Lv, Wei; Sun, Qiang; Lin, He; Sui, Ruirui: A penalty derivative-free algorithm for nonlinear constrained optimization (2015)
  12. Newby, Eric; Ali, M.M.: A trust-region-based derivative free algorithm for mixed integer programming (2015)
  13. Sampaio, Ph.R.; Toint, Ph.L.: A derivative-free trust-funnel method for equality-constrained nonlinear optimization (2015)
  14. Tenne, Yoel: An adaptive-topology ensemble algorithm for engineering optimization problems (2015)
  15. Yuan, Jinyun; Sampaio, Raimundo; Sun, Wenyu; Zhang, Liang: A wedge trust region method with self-correcting geometry for derivative-free optimization (2015)
  16. Yuan, Ya-xiang: Recent advances in trust region algorithms (2015)
  17. Zhou, Weijun: A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications (2015)
  18. Ng, Leo W.T.; Willcox, Karen E.: Multifidelity approaches for optimization under uncertainty (2014)
  19. Teytaud, Fabien; Teytaud, Olivier: Convergence rates of evolutionary algorithms and parallel evolutionary algorithms (2014)
  20. Xue, Dan; Sun, Wenyu: On convergence analysis of a derivative-free trust region algorithm for constrained optimization with separable structure (2014)
