A gradient descent algorithm for LASSO

LASSO is a useful method for achieving shrinkage and variable selection simultaneously. The main idea of LASSO is to use an L1 constraint in the regularization step. Starting from linear models, the idea of LASSO, i.e. using the L1 constraint, has been applied to various models such as wavelets, kernel machines, smoothing splines, and multiclass logistic models.
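The paper's own algorithm is not reproduced on this page, so the following is only an illustrative sketch of the L1-penalized least-squares problem that LASSO solves, using plain proximal gradient descent (ISTA) with a soft-thresholding step; the function names and the fixed step-size choice are assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - X b||^2 + lam * ||b||_1 by proximal gradient.

    Each iteration takes a gradient step on the smooth least-squares
    term, then applies soft-thresholding to handle the L1 penalty.
    """
    n, p = X.shape
    beta = np.zeros(p)
    # Step size 1/L, where L is the largest eigenvalue of X^T X
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)             # gradient of the smooth part
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta
```

Because the soft-thresholding step sets small coefficients exactly to zero, the iterate stays sparse, which is how the L1 constraint performs variable selection and shrinkage at the same time.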

References in zbMATH (referenced in 21 articles, 1 standard article)

Showing results 1 to 20 of 21.
Sorted by year (citations)


  1. Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy: The pre-image problem for Laplacian eigenmaps utilizing $L_1$ regularization with applications to data fusion (2017)
  2. Amato, Umberto; Antoniadis, Anestis; De Feis, Italia: Additive model selection (2016)
  3. Pillonetto, Gianluigi; Chen, Tianshi; Chiuso, Alessandro; De Nicolao, Giuseppe; Ljung, Lennart: Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint (2016)
  4. Zhao, Weihua; Zhang, Riquan: Variable selection of varying dispersion student-$t$ regression models (2015)
  5. Groll, Andreas; Tutz, Gerhard: Variable selection for generalized linear mixed models by $L_1$-penalized estimation (2014)
  6. Yu, WenBao; Chang, Yuan-chin Ivan; Park, Eunsik: A modified area under the ROC curve and its application to marker selection and classification (2014)
  7. Chin, Hui Han; Madry, Aleksander; Miller, Gary L.; Peng, Richard: Runtime guarantees for regression problems (2013)
  8. Neubauer, Jiří; Veselý, Vítězslav: Detection of multiple changes in mean by sparse parameter estimation (2013)
  9. Tutz, Gerhard; Petry, Sebastian: Nonparametric estimation of the link function including variable selection (2012)
  10. Wright, Stephen J.: Accelerated block-coordinate relaxation for regularized optimization (2012)
  11. Kwon, Sunghoon; Choi, Hosik; Kim, Yongdai: Quadratic approximation on SCAD penalized estimation (2011)
  12. Choi, Hosik; Kim, Jinseog; Kim, Yongdai: A sparse large margin semi-supervised learning method (2010)
  13. Yuan, Guo-Xun; Chang, Kai-Wei; Hsieh, Cho-Jui; Lin, Chih-Jen: A comparison of optimization methods and software for large-scale L1-regularized linear classification (2010)
  14. Cai, T.; Huang, J.; Tian, L.: Regularized estimation for the accelerated failure time model (2009)
  15. Daye, Z. John; Jeng, X. Jessie: Shrinkage and model selection with correlated variables via weighted fusion (2009)
  16. Martinussen, Torben; Sheike, Thomas H.: Covariate selection for the semiparametric additive risk model (2009)
  17. Hesterberg, Tim; Choi, Nam Hee; Meier, Lukas; Fraley, Chris: Least angle and $\ell _1$ penalized regression: a review (2008)
  18. Kim, Yongdai; Kim, Yuwon; Kim, Jinseog: A gradient descent algorithm for LASSO (2007)
  19. Liao, Lin; Fox, Dieter; Kautz, Henry: Hierarchical conditional random fields for GPS-based activity recognition (2007)
  20. Huang, Jian; Ma, Shuangge; Xie, Huiliang: Regularized estimation in the accelerated failure time model with high-dimensional covariates (2006)
