Pegasos

Pegasos: primal estimated sub-gradient solver for SVM. We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ϵ is O(1/ϵ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ϵ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is O(d/(λϵ)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non-linear kernels while working solely on the primal objective function, though in this case the runtime does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
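Since only the abstract appears here, the following is a minimal sketch of the single-example update it describes, not the authors' reference implementation: at step t, draw one training example, use step size 1/(λt), shrink w by the gradient of the regularizer, and add the hinge-loss sub-gradient when the margin is violated. The names (pegasos, lam, n_iters) are illustrative, and the projection onto the ball of radius 1/√λ is the optional step from the published algorithm.

```python
import numpy as np

def pegasos(X, y, lam=0.1, n_iters=1000, seed=0):
    """Sketch of Pegasos: stochastic sub-gradient descent on the primal
    SVM objective  (lam/2)*||w||^2 + (1/m) * sum_i hinge(y_i * <w, x_i>).
    Labels y are assumed to be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(m)            # each iteration touches one example
        eta = 1.0 / (lam * t)          # step size 1/(lambda * t)
        margin = y[i] * (w @ X[i])
        w *= (1.0 - eta * lam)         # shrink: sub-gradient of the regularizer
        if margin < 1:                 # hinge loss active: add its sub-gradient
            w += eta * y[i] * X[i]
        # optional projection onto the ball of radius 1/sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w

# toy usage on two separable Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w = pegasos(X, y, lam=0.01, n_iters=2000)
print((np.sign(X @ w) == y).mean())    # training accuracy
```

Because each step touches a single example and at most d non-zero features, the per-iteration cost is O(d), which is how the O(d/(λϵ)) total run-time quoted above arises.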


References in zbMATH (referenced in 103 articles, 1 standard article)

  1. Nguyen, Bac; Ferri, Francesc J.; Morell, Carlos; De Baets, Bernard: An efficient method for clustered multi-metric learning (2019)
  2. Samareh, Aven; Parizi, Mahshid Salemi: How effectively train large-scale machine learning models? (2019)
  3. Shahparast, Homeira; Mansoori, Eghbal G.: Developing an online general type-2 fuzzy classifier using evolving type-1 rules (2019)
  4. Aggarwal, Charu C.: Neural networks and deep learning. A textbook (2018)
  5. Aggarwal, Charu C.: Machine learning for text (2018)
  6. Bottou, Léon; Curtis, Frank E.; Nocedal, Jorge: Optimization methods for large-scale machine learning (2018)
  7. Csiba, Dominik; Richtárik, Peter: Importance sampling for minibatches (2018)
  8. Horn, Daniel; Demircioğlu, Aydın; Bischl, Bernd; Glasmachers, Tobias; Weihs, Claus: A comparative study on large scale kernelized support vector machines (2018)
  9. Huang, Lingxiao; Jin, Yifei; Li, Jian: SVM via saddle point optimization: new bounds and distributed algorithms (2018)
  10. Lei, Yunwen; Shi, Lei; Guo, Zheng-Chu: Convergence of unregularized online learning algorithms (2018)
  11. Liu, Ying; Xu, Zhen; Li, Chunguang: Distributed online semi-supervised support vector machine (2018)
  12. Manno, Andrea; Palagi, Laura; Sagratella, Simone: Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training (2018)
  13. Nguyen, Hien D.; Jones, Andrew T.; McLachlan, Geoffrey J.: Stream-suitable optimization algorithms for some soft-margin support vector machine variants (2018)
  14. Piccialli, Veronica; Sciandrone, Marco: Nonlinear optimization and support vector machines (2018)
  15. Tappenden, Rachael; Takáč, Martin; Richtárik, Peter: On the complexity of parallel coordinate descent (2018)
  16. Van der Laan, Mark J.; Rose, Sherri: Targeted learning in data science. Causal inference for complex longitudinal studies (2018)
  17. van Rijn, Jan N.; Holmes, Geoffrey; Pfahringer, Bernhard; Vanschoren, Joaquin: The online performance estimation framework: heterogeneous ensemble learning for data streams (2018)
  18. Wang, Zhen; Shao, Yuan-Hai; Bai, Lan; Li, Chun-Na; Liu, Li-Ming; Deng, Nai-Yang: Insensitive stochastic gradient twin support vector machines for large scale problems (2018)
  19. Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.: Weighted SGD for $\ell_p$ regression with randomized preconditioning (2018)
  20. Zhang, Suofei; Xing, Lingzhi; Zhou, Lin; Sun, Zhixin: Object tracking by incremental structural learning of deformable parts (2018)