LASSO

A gradient descent algorithm for LASSO. LASSO is a useful method for achieving shrinkage and variable selection simultaneously. The main idea of LASSO is to use an L1 constraint in the regularization step. Starting from linear models, the idea of LASSO, i.e. using the L1 constraint, has been applied to various models such as wavelets, kernel machines, smoothing splines, and multiclass logistic models.
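To make the L1-constrained regularization idea concrete, here is a minimal Python sketch of the LASSO estimate computed by proximal gradient descent (iterative soft-thresholding). It is an illustrative implementation under assumed choices of step size, regularization parameter, and function names; it is not the specific gradient descent algorithm of the standard article.

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of the L1 norm: shrink each coordinate toward zero by t.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        # Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by iterative soft-thresholding
        # (a standard proximal gradient scheme; a sketch, not the article's method).
        n, p = X.shape
        beta = np.zeros(p)
        step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y) / n      # gradient of the squared-error term
            beta = soft_threshold(beta - step * grad, step * lam)
        return beta

    # Usage on synthetic data with a sparse true coefficient vector.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    true_beta = np.zeros(20)
    true_beta[:3] = [2.0, -1.5, 1.0]
    y = X @ true_beta + 0.1 * rng.standard_normal(100)
    print(lasso_ista(X, y, lam=0.1).round(2))

The soft-thresholding step is what sets most small coefficients exactly to zero, which is how the L1 constraint produces variable selection as well as shrinkage.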


References in zbMATH (referenced in 24 articles, 1 standard article)

Showing results 1 to 20 of 24, sorted by year (citations).


  1. Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy: The pre-image problem for Laplacian eigenmaps utilizing $L_1$ regularization with applications to data fusion (2017)
  2. Amato, Umberto; Antoniadis, Anestis; De Feis, Italia: Additive model selection (2016)
  3. Pillonetto, Gianluigi; Chen, Tianshi; Chiuso, Alessandro; De Nicolao, Giuseppe; Ljung, Lennart: Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint (2016)
  4. Zhao, Weihua; Zhang, Riquan: Variable selection of varying dispersion student-$t$ regression models (2015)
  5. Groll, Andreas; Tutz, Gerhard: Variable selection for generalized linear mixed models by $L_1$-penalized estimation (2014)
  6. Yu, WenBao; Chang, Yuan-chin Ivan; Park, Eunsik: A modified area under the ROC curve and its application to marker selection and classification (2014)
  7. Chin, Hui Han; Madry, Aleksander; Miller, Gary L.; Peng, Richard: Runtime guarantees for regression problems (2013)
  8. Neubauer, Jiří; Veselý, Vítězslav: Detection of multiple changes in mean by sparse parameter estimation (2013)
  9. Tutz, Gerhard; Petry, Sebastian: Nonparametric estimation of the link function including variable selection (2012)
  10. Wright, Stephen J.: Accelerated block-coordinate relaxation for regularized optimization (2012)
  11. Choi, Hosik; Yeo, Donghwa; Kwon, Sunghoon; Kim, Yongdai: Gene selection and prediction for cancer classification using support vector machines with a reject option (2011)
  12. Kwon, Sunghoon; Choi, Hosik; Kim, Yongdai: Quadratic approximation on SCAD penalized estimation (2011)
  13. Choi, Hosik; Kim, Jinseog; Kim, Yongdai: A sparse large margin semi-supervised learning method (2010)
  14. Goeman, Jelle J.: $L_1$ penalized estimation in the Cox proportional hazards model (2010)
  15. Song, Xiao; Ma, Shuangge: Penalised variable selection with U-estimates (2010)
  16. Yuan, Guo-Xun; Chang, Kai-Wei; Hsieh, Cho-Jui; Lin, Chih-Jen: A comparison of optimization methods and software for large-scale L1-regularized linear classification (2010)
  17. Cai, T.; Huang, J.; Tian, L.: Regularized estimation for the accelerated failure time model (2009)
  18. Daye, Z. John; Jeng, X. Jessie: Shrinkage and model selection with correlated variables via weighted fusion (2009)
  19. Martinussen, Torben; Sheike, Thomas H.: Covariate selection for the semiparametric additive risk model (2009)
  20. Hesterberg, Tim; Choi, Nam Hee; Meier, Lukas; Fraley, Chris: Least angle and $\ell_1$ penalized regression: a review (2008)
