ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization. We consider unconstrained optimization problems where only “stochastic” estimates of the objective function are observable as replicates from a Monte Carlo oracle. The Monte Carlo oracle is assumed to provide no direct observations of the function gradient. We present ASTRO-DF, a class of derivative-free trust-region algorithms in which a stochastic local model is constructed, optimized, and updated iteratively. Function estimation and model construction within ASTRO-DF are “adaptive” in the sense that the extent of Monte Carlo sampling is determined by continuously monitoring and balancing measures of sampling error (or variance) and structural error (or model bias). Such balancing of errors is designed to ensure that the Monte Carlo effort within ASTRO-DF is sensitive to the algorithm trajectory: sampling is higher whenever an iterate is inferred to be close to a critical point and lower when far away. We demonstrate the almost sure convergence of ASTRO-DF’s iterates to first-order critical points when using stochastic polynomial interpolation models. The question of using more complicated models, e.g., regression or stochastic kriging, in combination with adaptive sampling is worth further investigation and will benefit from the methods of proof presented here. We speculate that ASTRO-DF’s iterates achieve the canonical Monte Carlo convergence rate, although a proof remains elusive.
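The adaptive-sampling idea described above can be sketched in one dimension: keep drawing oracle replicates until the standard error of the sample mean is dominated by the O(δ²) model bias of the trust region, then run an ordinary trust-region accept/reject step on the resulting estimates. This is a minimal hypothetical sketch, not the authors' implementation: the function names, the central-difference quadratic model, and all parameter values (kappa, eta, gamma, the sample-size caps) are illustrative assumptions.

```python
import math
import random

def adaptive_estimate(oracle, x, delta, kappa=1.0, n_min=4, n_max=4096):
    # Sample the stochastic oracle at x until the estimated standard error
    # of the sample mean falls below kappa * delta**2, mimicking the balance
    # of sampling error (variance) against O(delta^2) structural error (bias).
    samples = [oracle(x) for _ in range(n_min)]
    while True:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        if math.sqrt(var / n) <= kappa * delta ** 2 or n >= n_max:
            return mean
        samples.extend(oracle(x) for _ in range(n))  # double the sample size

def astro_df_1d(oracle, x0, delta0=1.0, eta=0.1, gamma=0.5, max_iter=50):
    # One-dimensional derivative-free trust-region loop (hypothetical sketch).
    x, delta = x0, delta0
    for _ in range(max_iter):
        # Build a quadratic model from adaptively estimated function values.
        f0 = adaptive_estimate(oracle, x, delta)
        fp = adaptive_estimate(oracle, x + delta, delta)
        fm = adaptive_estimate(oracle, x - delta, delta)
        g = (fp - fm) / (2 * delta)          # model gradient
        h = (fp - 2 * f0 + fm) / delta ** 2  # model curvature
        # Minimize the quadratic model on the trust region [-delta, delta].
        if h > 0 and abs(g / h) <= delta:
            s = -g / h                        # interior Newton-like step
        else:
            s = -math.copysign(delta, g)      # step to the region boundary
        pred = -(g * s + 0.5 * h * s * s)     # predicted decrease
        f_new = adaptive_estimate(oracle, x + s, delta)
        rho = (f0 - f_new) / pred if pred > 0 else -1.0
        if rho >= eta:
            x, delta = x + s, delta / gamma   # success: accept step, expand
        else:
            delta *= gamma                    # failure: shrink trust region
        if delta < 1e-4:
            break
    return x
```

Note how the sampling threshold `kappa * delta**2` shrinks with the trust-region radius: near a critical point the radius collapses, so the loop automatically demands more replicates, matching the trajectory-sensitive sampling described in the abstract.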

References in zbMATH (referenced in 11 articles, 1 standard article)


  1. Audet, Charles; Dzahini, Kwassi Joseph; Kokkolaras, Michael; Le Digabel, Sébastien: Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates (2021)
  2. Liu, Yang; Roosta, Fred: Convergence of Newton-MR under inexact Hessian information (2021)
  3. Pasupathy, Raghu; Song, Yongjia: Adaptive sequential sample average approximation for solving two-stage stochastic linear programs (2021)
  4. Hare, Warren: A discussion on variational analysis in derivative-free optimization (2020)
  5. Pedrielli, Giulia; Wang, Songhao; Ng, Szu Hui: An extended two-stage sequential optimization approach: properties and performance (2020)
  6. Xu, Peng; Roosta, Fred; Mahoney, Michael W.: Newton-type methods for non-convex optimization under inexact Hessian information (2020)
  7. Jalilzadeh, Afrooz; Lei, Jinlong; Shanbhag, Uday V.: Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems (2019)
  8. Larson, Jeffrey; Menickelly, Matt; Wild, Stefan M.: Derivative-free optimization methods (2019)
  9. Chen, R.; Menickelly, M.; Scheinberg, K.: Stochastic optimization using a trust-region method and random models (2018)
  10. Maggiar, Alvaro; Wächter, Andreas; Dolinskaya, Irina S.; Staum, Jeremy: A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling (2018)
  11. Shashaani, Sara; Hashemi, Fatemeh S.; Pasupathy, Raghu: ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization (2018)