SAGA

SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
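The abstract states the method only in words; for orientation, the following is a minimal sketch of a SAGA-style proximal update in Python. It is illustrative only: the helper names (grad_i, prox), the step size, and the toy lasso example are our own assumptions, not code from the paper.

    import numpy as np

    def saga(grad_i, prox, x0, n, step, n_iters, seed=0):
        # Sketch of the SAGA update (our reading of the method, not reference code):
        # keep the most recently seen gradient for every index i, form a
        # variance-reduced gradient estimate, then take a proximal step.
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        table = np.array([grad_i(x, i) for i in range(n)])  # stored per-sample gradients
        avg = table.mean(axis=0)                            # running average of the table
        for _ in range(n_iters):
            j = rng.integers(n)
            g_new = grad_i(x, j)
            v = g_new - table[j] + avg        # unbiased, variance-reduced direction
            x = prox(x - step * v, step)      # proximal step handles the regulariser
            avg += (g_new - table[j]) / n     # keep the average consistent with the table
            table[j] = g_new
        return x

    # Toy lasso example (entirely illustrative): f_i(x) = 0.5*(a_i^T x - b_i)^2 + lam*||x||_1
    A = np.random.default_rng(1).normal(size=(50, 10))
    b = A @ np.ones(10) + 0.1 * np.random.default_rng(2).normal(size=50)
    lam = 0.1
    grad = lambda x, i: (A[i] @ x - b[i]) * A[i]                            # gradient of f_i
    soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s * lam, 0.0)   # prox of s*lam*||.||_1
    x_hat = saga(grad, soft, np.zeros(10), n=50, step=0.01, n_iters=5000)

The stored-gradient table is what distinguishes SAGA-type methods from SVRG-type ones: the estimate stays unbiased at every step without periodic full-gradient recomputation, at the cost of O(n) gradient storage.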


References in zbMATH (referenced in 89 articles)

Showing results 1 to 20 of 89, sorted by year (citations).


  1. Belomestny, Denis; Iosipoi, Leonid; Moulines, Eric; Naumov, Alexey; Samsonov, Sergey: Variance reduction for dependent sequences with applications to stochastic gradient MCMC (2021)
  2. Bian, Fengmiao; Liang, Jingwei; Zhang, Xiaoqun: A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization (2021)
  3. Chen, Chenxi; Chen, Yunmei; Ye, Xiaojing: A randomized incremental primal-dual method for decentralized consensus optimization (2021)
  4. Cui, Shisheng; Shanbhag, Uday V.: On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems (2021)
  5. Duchi, John C.; Glynn, Peter W.; Namkoong, Hongseok: Statistics of robust optimization: a generalized empirical likelihood approach (2021)
  6. Duchi, John C.; Ruan, Feng: Asymptotic optimality in stochastic optimization (2021)
  7. Gower, Robert M.; Richtárik, Peter; Bach, Francis: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (2021)
  8. Gürbüzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.: Why random reshuffling beats stochastic gradient descent (2021)
  9. Hanzely, Filip; Richtárik, Peter: Fastest rates for stochastic mirror descent methods (2021)
  10. Hu, Bin; Seiler, Peter; Lessard, Laurent: Analysis of biased stochastic gradient descent using sequential semidefinite programs (2021)
  11. Lu, Haihao; Freund, Robert M.: Generalized stochastic Frank-Wolfe algorithm with stochastic “substitute” gradient for structured convex optimization (2021)
  12. Martin, Matthieu; Nobile, Fabio: PDE-constrained optimal control problems with uncertain parameters using SAGA (2021)
  13. Nguyen, Lam M.; Scheinberg, Katya; Takáč, Martin: Inexact SARAH algorithm for stochastic optimization (2021)
  14. Qian, Xun; Qu, Zheng; Richtárik, Peter: L-SVRG and L-Katyusha with arbitrary sampling (2021)
  15. Tuckute, Greta; Hansen, Sofie Therese; Kjaer, Troels Wesenberg; Hansen, Lars Kai: Real-time decoding of attentional states using closed-loop EEG neurofeedback (2021)
  16. Xiao, Xiantao: A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods (2021)
  17. Yu, Tengteng; Liu, Xin-Wei; Dai, Yu-Hong; Sun, Jie: Stochastic variance reduced gradient methods using a trust-region-like scheme (2021)
  18. Zhang, Junyu; Xiao, Lin: Multilevel composite stochastic optimization via nested variance reduction (2021)
  19. Aravkin, Aleksandr; Davis, Damek: Trimmed statistical estimation via variance reduction (2020)
  20. Boffi, Nicholas M.; Slotine, Jean-Jacques E.: A continuous-time analysis of distributed stochastic gradient (2020)
