Pegasos

Pegasos: primal estimated sub-gradient solver for SVM. We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ϵ is O(1/ϵ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ϵ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of the SVM. For a linear kernel, the total run-time of our method is O(d/(λϵ)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non-linear kernels while working solely on the primal objective function, though in this case the run-time does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
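The procedure summarized above amounts to a per-example stochastic sub-gradient step on the regularized hinge loss with step size 1/(λt), optionally followed by a projection onto the ball of radius 1/√λ. The Python sketch below illustrates that update under these assumptions; the function name pegasos_train and its parameters (lam, n_iters, seed) are illustrative and do not come from the authors' reference implementation.

```python
import numpy as np

def pegasos_train(X, y, lam=0.1, n_iters=1000, seed=0):
    """Sketch of the Pegasos stochastic sub-gradient update for a linear SVM.

    X: (n_samples, d) feature matrix; y: (n_samples,) labels in {-1, +1}.
    lam is the SVM regularization parameter (lambda in the paper).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)              # sample one training example
        eta = 1.0 / (lam * t)            # step size 1/(lambda * t)
        margin = y[i] * np.dot(w, X[i])
        # Sub-gradient step on (lambda/2)||w||^2 + hinge loss of the sampled example.
        w *= (1.0 - eta * lam)
        if margin < 1.0:                 # hinge loss active: add the example's contribution
            w += eta * y[i] * X[i]
        # Optional projection onto the ball of radius 1/sqrt(lambda).
        norm = np.linalg.norm(w)
        radius = 1.0 / np.sqrt(lam)
        if norm > radius:
            w *= radius / norm
    return w
```

Because each iteration touches a single example, its cost is proportional to the number of non-zero features d, which is the source of the O(d/(λϵ)) total run-time stated in the abstract.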


References in zbMATH (referenced in 57 articles, 1 standard article)

Showing results 1 to 20 of 57, sorted by year (citations).


  1. Bottou, Léon; Curtis, Frank E.; Nocedal, Jorge: Optimization methods for large-scale machine learning (2018)
  2. Piccialli, Veronica; Sciandrone, Marco: Nonlinear optimization and support vector machines (2018)
  3. Tappenden, Rachael; Takáč, Martin; Richtárik, Peter: On the complexity of parallel coordinate descent (2018)
  4. van Rijn, Jan N.; Holmes, Geoffrey; Pfahringer, Bernhard; Vanschoren, Joaquin: The online performance estimation framework: heterogeneous ensemble learning for data streams (2018)
  5. Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.: Weighted SGD for ℓ_p regression with randomized preconditioning (2018)
  6. Amina Asif, Wajid Arshad Abbasi, Farzeen Munir, Asa Ben-Hur, Fayyaz ul Amir Afsar Minhas: pyLEMMINGS: Large Margin Multiple Instance Classification and Ranking for Bioinformatics Applications (2017) arXiv
  7. Couellan, Nicolas; Wang, Wenjuan: Uncertainty-safe large scale support vector machines (2017)
  8. Le, Trung; Nguyen, Tu Dinh; Nguyen, Vu; Phung, Dinh: Approximation vector machines for large-scale online learning (2017)
  9. Norton, Matthew; Mafusalov, Alexander; Uryasev, Stan: Soft margin support vector classification as buffered probability minimization (2017)
  10. Rachkovskij, D. A.: Binary vectors for fast distance and similarity estimation (2017)
  11. Schmidt, Mark; Le Roux, Nicolas; Bach, Francis: Minimizing finite sums with the stochastic average gradient (2017)
  12. Wang, Ximing; Fan, Neng; Pardalos, Panos M.: Stochastic subgradient descent method for large-scale robust chance-constrained support vector machines (2017)
  13. Gao, Wei; Wang, Lu; Jin, Rong; Zhu, Shenghuo; Zhou, Zhi-Hua: One-pass AUC optimization (2016)
  14. Liu, Fengqiu; Xue, Xiaoping: Subgradient-based neural network for nonconvex optimization problems in support vector machines with indefinite kernels (2016)
  15. Lu, Jing; Hoi, Steven C. H.; Wang, Jialei; Zhao, Peilin; Liu, Zhi-Yong: Large scale online kernel learning (2016)
  16. Peña, Javier; Soheili, Negar: A deterministic rescaled perceptron algorithm (2016)
  17. Richtárik, Peter; Takáč, Martin: Distributed coordinate descent method for learning with big data (2016)
  18. Rosasco, Lorenzo; Villa, Silvia; Vũ, Bằng Công: Stochastic forward-backward splitting for monotone inclusions (2016)
  19. Shalev-Shwartz, Shai; Zhang, Tong: Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization (2016)
  20. Yang, Tianbao; Jin, Rong; Zhu, Shenghuo; Lin, Qihang: On data preconditioning for regularized loss minimization (2016)
