SGD-QN

SGD-QN: careful quasi-Newton stochastic gradient descent. The SGD-QN algorithm is a stochastic gradient descent method that makes careful use of second-order information and splits the parameter update into independently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient descent but requires fewer iterations to achieve the same accuracy. The algorithm won the “wild track” of the first PASCAL Large Scale Learning Challenge.
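The core idea can be illustrated with a short sketch: rescale each stochastic gradient step by a diagonal matrix B, and refresh B from a secant estimate only every few iterations, so most steps cost no more than plain first-order SGD. The Python snippet below is a minimal, hedged illustration under these assumptions, not the reference implementation; the function names, hyperparameters, and clipping bounds are illustrative choices.

```python
import numpy as np

def sgdqn_sketch(X, y, loss_grad, lam=1e-4, t0=1e4, skip=16, epochs=5):
    """Sketch of an SGD-QN-style update (after Bordes, Bottou & Gallinari).

    Keeps a diagonal scaling B approximating the inverse Hessian diagonal
    via a secant estimate, refreshed only every `skip` iterations so that
    most steps are as cheap as first-order SGD. Hyperparameter names and
    defaults are assumptions for illustration.
    """
    n, d = X.shape
    w = np.zeros(d)
    B = np.full(d, 1.0 / lam)          # initial scaling ~ inverse of the l2 term
    t, count, update_b = 0, skip, False
    for _ in range(epochs):
        for i in np.random.permutation(n):
            g = loss_grad(w, X[i], y[i]) + lam * w   # regularized example gradient
            if update_b:
                # secant estimate: gradients on the same example at w and w_new
                w_new = w - (1.0 / (t + t0)) * B * g
                g_new = loss_grad(w_new, X[i], y[i]) + lam * w_new
                dw, dg = w_new - w, g_new - g
                ratio = np.where(np.abs(dg) > 1e-12, dw / dg, B)
                # damped diagonal update, clipped to stay positive and bounded
                B = np.clip(B + (2.0 / skip) * (ratio - B), 1e-2 / lam, 1e2 / lam)
                w, update_b = w_new, False
            else:
                w = w - (1.0 / (t + t0)) * B * g     # cheap first-order-cost step
            t += 1
            count -= 1
            if count <= 0:                           # schedule the next B refresh
                count, update_b = skip, True
    return w
```

For instance, for logistic regression one could pass `loss_grad = lambda w, x, y: -y * x / (1 + np.exp(y * (w @ x)))` with labels in {-1, +1}; the separate `skip` schedule is what keeps the second-order bookkeeping off the critical path of the ordinary updates.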


References in zbMATH (referenced in 23 articles)

Showing results 1 to 20 of 23, sorted by year (citations).


  1. Berahas, Albert S.; Takáč, Martin: A robust multi-batch L-BFGS method for machine learning (2020)
  2. Mokhtari, Aryan; Koppel, Alec; Takáč, Martin; Ribeiro, Alejandro: A class of parallel doubly stochastic algorithms for large-scale learning (2020)
  3. Yousefian, Farzad; Nedić, Angelia; Shanbhag, Uday V.: On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis (2020)
  4. Milzarek, Andre; Xiao, Xiantao; Cen, Shicong; Wen, Zaiwen; Ulbrich, Michael: A stochastic semismooth Newton method for nonsmooth nonconvex optimization (2019)
  5. Wang, Xiaoyu; Wang, Xiao; Yuan, Ya-Xiang: Stochastic proximal quasi-Newton methods for non-convex composite optimization (2019)
  6. Bottou, Léon; Curtis, Frank E.; Nocedal, Jorge: Optimization methods for large-scale machine learning (2018)
  7. Lakshmanan, K.; Bhatnagar, Shalabh: Quasi-Newton smoothed functional algorithms for unconstrained and constrained simulation optimization (2017)
  8. Pilanci, Mert; Wainwright, Martin J.: Newton sketch: a near linear-time optimization algorithm with linear-quadratic convergence (2017)
  9. Schmidt, Mark; Le Roux, Nicolas; Bach, Francis: Minimizing finite sums with the stochastic average gradient (2017)
  10. Wang, Xiao; Ma, Shiqian; Goldfarb, Donald; Liu, Wei: Stochastic quasi-Newton methods for nonconvex stochastic optimization (2017)
  11. Wang, Ximing; Fan, Neng; Pardalos, Panos M.: Stochastic subgradient descent method for large-scale robust chance-constrained support vector machines (2017)
  12. Wawrzyński, Paweł: ASD+M: automatic parameter tuning in stochastic optimization and on-line learning (2017)
  13. Byrd, R. H.; Hansen, S. L.; Nocedal, Jorge; Singer, Y.: A stochastic quasi-Newton method for large-scale optimization (2016)
  14. Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua: A fast SVD-hidden-nodes based extreme learning machine for large-scale data analytics (2016)
  15. Patel, Vivak: Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning (2016)
  16. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P.: A globally convergent incremental Newton method (2015)
  17. Sopyła, Krzysztof; Drozda, Paweł: Stochastic gradient descent with Barzilai-Borwein update step for SVM (2015)
  18. Toulis, Panos; Airoldi, Edoardo M.: Scalable estimation strategies based on stochastic approximations: classical results and new insights (2015)
  19. Jing, Xingjian: Robust adaptive learning of feedforward neural networks via LMI optimizations (2012)
  20. Kim, Youngsung; Toh, Kar-Ann; Teoh, Andrew Beng Jin; Eng, How-Lung; Yau, Wei-Yun: An online AUC formulation for binary classification (2012)
