A gradient descent algorithm for LASSO

LASSO is a useful method for achieving shrinkage and variable selection simultaneously. Its main idea is to impose an $L_1$ constraint in the regularization step. Starting from linear models, this idea of $L_1$-constrained estimation has been applied to various other models, such as wavelets, kernel machines, smoothing splines, and multiclass logistic models.
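The paper's own gradient descent algorithm is not reproduced here; as a minimal sketch of the underlying idea, the following proximal-gradient (ISTA-style) iteration solves the $L_1$-penalized least-squares problem by alternating a gradient step on the smooth loss with soft-thresholding. All names, the toy data, and the choice of step size are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding: the proximal operator of the L1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Proximal gradient for min_b 0.5*||y - X b||^2 + lam*||b||_1 (illustrative)."""
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, with L the largest eigenvalue of X^T X (Lipschitz constant)
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)   # gradient of the smooth squared-error part
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy example: only the first 2 of 10 true coefficients are nonzero
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = lasso_ista(X, y, lam=5.0)
```

The soft-thresholding step is what produces exact zeros in `beta_hat`, giving simultaneous shrinkage and variable selection.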

References in zbMATH (referenced in 26 articles, 1 standard article)

Showing results 1 to 20 of 26.
Sorted by year (citations)


  1. Yuan, Xiao-Tong; Li, Ping; Zhang, Tong: Gradient hard thresholding pursuit (2018)
  2. Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy: The pre-image problem for Laplacian eigenmaps utilizing $L_1$ regularization with applications to data fusion (2017)
  3. Amato, Umberto; Antoniadis, Anestis; De Feis, Italia: Additive model selection (2016)
  4. Lee, Sangin; Kwon, Sunghoon; Kim, Yongdai: A modified local quadratic approximation algorithm for penalized optimization problems (2016)
  5. Pillonetto, Gianluigi; Chen, Tianshi; Chiuso, Alessandro; De Nicolao, Giuseppe; Ljung, Lennart: Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint (2016)
  6. Zhao, Weihua; Zhang, Riquan: Variable selection of varying dispersion student-$t$ regression models (2015)
  7. Groll, Andreas; Tutz, Gerhard: Variable selection for generalized linear mixed models by $L_1$-penalized estimation (2014)
  8. Yu, WenBao; Chang, Yuan-chin Ivan; Park, Eunsik: A modified area under the ROC curve and its application to marker selection and classification (2014)
  9. Chin, Hui Han; Madry, Aleksander; Miller, Gary L.; Peng, Richard: Runtime guarantees for regression problems (2013)
  10. Neubauer, Jiří; Veselý, Vítězslav: Detection of multiple changes in mean by sparse parameter estimation (2013)
  11. Tutz, Gerhard; Petry, Sebastian: Nonparametric estimation of the link function including variable selection (2012)
  12. Wright, Stephen J.: Accelerated block-coordinate relaxation for regularized optimization (2012)
  13. Choi, Hosik; Yeo, Donghwa; Kwon, Sunghoon; Kim, Yongdai: Gene selection and prediction for cancer classification using support vector machines with a reject option (2011)
  14. Kwon, Sunghoon; Choi, Hosik; Kim, Yongdai: Quadratic approximation on SCAD penalized estimation (2011)
  15. Choi, Hosik; Kim, Jinseog; Kim, Yongdai: A sparse large margin semi-supervised learning method (2010)
  16. Goeman, Jelle J.: $L_1$ penalized estimation in the Cox proportional hazards model (2010)
  17. Song, Xiao; Ma, Shuangge: Penalised variable selection with U-estimates (2010)
  18. Yuan, Guo-Xun; Chang, Kai-Wei; Hsieh, Cho-Jui; Lin, Chih-Jen: A comparison of optimization methods and software for large-scale $L_1$-regularized linear classification (2010)
  19. Cai, T.; Huang, J.; Tian, L.: Regularized estimation for the accelerated failure time model (2009)
  20. Daye, Z. John; Jeng, X. Jessie: Shrinkage and model selection with correlated variables via weighted fusion (2009)
