Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems.

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
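The abstract describes an incremental gradient method for finite sums that stores per-term information and can visit the terms in a fresh random permutation each epoch. A minimal NumPy sketch of this style of update is below, on a small least-squares finite sum. The problem data, the step size `alpha`, and the exact update rule are illustrative assumptions for this sketch, not the paper's precise method or constants.

```python
import numpy as np

# Assumed toy finite sum: F(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the i-th term of the sum.
    return A[i] * (A[i] @ w - b[i])

# Finito-style state: a stored point phi_i and stored gradient for every term.
phi = np.zeros((n, d))
g = np.array([grad_i(phi[i], i) for i in range(n)])
alpha = 0.2  # assumed step size; the paper ties its step to strong convexity

for epoch in range(300):
    for i in rng.permutation(n):  # sampling without replacement within each epoch
        # New iterate from the averaged stored points and gradients.
        w = phi.mean(axis=0) - alpha * g.mean(axis=0)
        phi[i] = w                # refresh the stored point for term i
        g[i] = grad_i(w, i)       # refresh its stored gradient

# Compare against the exact least-squares minimizer.
w_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

At a fixed point all stored points equal `w`, so the averaged stored gradient must vanish and `w` minimizes the full sum; the per-epoch permutation is the "permutable" sampling scheme the abstract refers to.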

References in zbMATH (referenced in 18 articles)

Sorted by year (citations).

  1. Gower, Robert M.; Richtárik, Peter; Bach, Francis: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (2021)
  2. Hu, Bin; Seiler, Peter; Lessard, Laurent: Analysis of biased stochastic gradient descent using sequential semidefinite programs (2021)
  3. Tatarenko, Tatiana; Nedich, Angelia: A smooth inexact penalty reformulation of convex problems with linear constraints (2021)
  4. Kulunchakov, Andrei; Mairal, Julien: Estimate sequences for stochastic composite optimization: variance reduction, acceleration, and robustness to noise (2020)
  5. Le Thi, Hoai An; Le, Hoai Minh; Phan, Duy Nhat; Tran, Bach: Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression (2020)
  6. Mishchenko, Konstantin; Iutzeler, Franck; Malick, Jérôme: A distributed flexible delay-tolerant proximal gradient algorithm (2020)
  7. Park, Youngsuk; Ryu, Ernest K.: Linear convergence of cyclic SAGA (2020)
  8. Zhou, Dongruo; Xu, Pan; Gu, Quanquan: Stochastic nested variance reduction for nonconvex optimization (2020)
  9. Lin, Hongzhou; Mairal, Julien; Harchaoui, Zaid: An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration (2019)
  10. Luo, Zhijian; Qian, Yuntao: Stochastic sub-sampled Newton method with variance reduction (2019)
  11. Briceño-Arias, Luis M.; Davis, Damek: Forward-backward-half forward algorithm for solving monotone inclusions (2018)
  12. Lin, Hongzhou; Mairal, Julien; Harchaoui, Zaid: Catalyst acceleration for first-order convex optimization: from theory to practice (2018)
  13. Mokhtari, Aryan; Eisen, Mark; Ribeiro, Alejandro: IQN: an incremental quasi-Newton method with local superlinear convergence rate (2018)
  14. Mokhtari, Aryan; Gürbüzbalaban, Mert; Ribeiro, Alejandro: Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate (2018)
  15. Vanli, Nuri Denizcan; Gürbüzbalaban, Mert; Ozdaglar, Asuman: Global convergence rate of proximal incremental aggregated gradient methods (2018)
  16. Cheung, Yiu-ming; Lou, Jian: Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization (2017)
  17. Lee, Jason D.; Lin, Qihang; Ma, Tengyu; Yang, Tianbao: Distributed stochastic variance reduced gradient methods by sampling extra data with replacement (2017)
  18. Mairal, Julien: Incremental majorization-minimization optimization with application to large-scale machine learning (2015)