Pegasos

Pegasos: primal estimated sub-gradient solver for SVM. We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ϵ is Õ(1/ϵ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ϵ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is Õ(d/(λϵ)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non-linear kernels while working solely on the primal objective function, though in this case the run-time does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
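Concretely, each Pegasos iteration draws one training example at random, takes a sub-gradient step on the regularized hinge-loss objective with step size 1/(λt), and optionally projects the weight vector onto the ball of radius 1/√λ. Below is a minimal sketch of this update for the linear-kernel case, assuming dense NumPy arrays and ±1 labels; the function name and signature are ours for illustration, not from the paper.

```python
import numpy as np

def pegasos(X, y, lam, T, seed=0):
    """Illustrative sketch of the Pegasos update for the linear-kernel SVM primal
        f(w) = (lam/2)||w||^2 + (1/m) sum_i max(0, 1 - y_i <w, x_i>).
    X: (m, d) feature matrix; y: (m,) array of +/-1 labels."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(m)            # each iteration operates on a single example
        eta = 1.0 / (lam * t)          # step size 1/(lambda * t)
        if y[i] * (X[i] @ w) < 1:      # hinge loss active: sub-gradient includes -y_i x_i
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # only the regularizer contributes
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w

# toy usage: two Gaussian blobs with +/-1 labels
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w = pegasos(X, y, lam=0.1, T=1000)
```

Because each step touches a single example, the per-iteration cost is proportional to the number of non-zero features in that example; this is why the total Õ(d/(λϵ)) run-time is independent of the training set size.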


References in zbMATH (referenced in 103 articles, 1 standard article)

  1. Carrizosa, Emilio; Romero Morales, Dolores: Supervised classification and mathematical optimization (2013)
  2. Cocianu, Catalina-Lucia; State, Luminita; Mircea, Marinela; Vlamos, Panayiotis: A faster gradient ascent learning algorithm for nonlinear SVM (2013)
  3. Djuric, Nemanja; Lan, Liang; Vucetic, Slobodan; Wang, Zhuang: BudgetedSVM: a toolbox for scalable SVM approximations (2013)
  4. Jiang, Hui; Zhang, Bo: Dynamical memory control based on projection technique for online regression (2013)
  5. Lee, Sangkyun; Wright, Stephen J.: Stochastic subgradient estimation training for support vector machines (2013)
  6. Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei: Training L_p norm multiple kernel learning in the primal (2013)
  7. Sánchez, Jorge; Perronnin, Florent; Mensink, Thomas; Verbeek, Jakob: Image classification with the Fisher vector: theory and practice (2013)
  8. Zhou, Shuisheng; Cui, Jiangtao; Ye, Feng; Liu, Hongwei; Zhu, Qiang: New smoothing SVM algorithm with tight error bound and efficient reduced techniques (2013)
  9. Bshouty, Nader H.; Long, Philip M.: Linear classifiers are nearly optimal when hidden variables have diverse effects (2012)
  10. Ghadimi, Saeed; Lan, Guanghui: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. I: A generic algorithmic framework (2012)
  11. Miao, Xu; Rao, Rajesh P. N.: Fast structured prediction using large margin sigmoid belief networks (2012)
  12. Rodriguez-Lujan, Irene; Cruz, Carlos Santa; Huerta, Ramon: Hierarchical linear support vector machine (2012)
  13. Xu, Bo; Huang, Kaizhu; Liu, Cheng-Lin: Maxi-Min discriminant analysis via online learning (2012)
  14. Yugov, Vsevolod; Kumazawa, Itsuo: Online boosting algorithm based on two-phase SVM training (2012)
  15. Shalev-Shwartz, Shai; Singer, Yoram; Srebro, Nathan; Cotter, Andrew: Pegasos: primal estimated sub-gradient solver for SVM (2011)
  16. Sutton, Charles; McCallum, Andrew: An introduction to conditional random fields (2011)
  17. Landwehr, Niels; Passerini, Andrea; De Raedt, Luc; Frasconi, Paolo: Fast learning of relational kernels (2010)
  18. Wang, Zhuang; Vucetic, Slobodan: Online training on a budget of support vector machines using twin prototypes (2010)
  19. Dinuzzo, Francesco; De Nicolao, Giuseppe: An algebraic characterization of the optimum of regularized kernel methods (2009)
  20. Hsu, Chun-Nan; Huang, Han-Shen; Chang, Yu-Ming; Lee, Yuh-Jye: Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning (2009)