OSCAR

Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR.

Variable selection can be challenging, particularly when there are many predictors with possibly high correlations among them, as in gene expression data. In this article, a new method called OSCAR (octagonal shrinkage and clustering algorithm for regression) is proposed to simultaneously select variables and group them into predictive clusters. In addition to improving prediction accuracy and interpretation, the resulting groups can be investigated further to discover what gives rise to their shared behavior. The technique is based on penalized least squares with a geometrically intuitive penalty function that shrinks some coefficients to exactly zero. The same penalty also forces some coefficients to be exactly equal, encouraging correlated predictors with a similar effect on the response to form predictive clusters represented by a single coefficient. The proposed procedure is shown to compare favorably to existing shrinkage and variable selection techniques in terms of both prediction error and model complexity, while yielding the additional grouping information.
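Concretely, the OSCAR penalty combines an L1 term with pairwise L-infinity terms: sum_j |beta_j| + c * sum_{j<k} max(|beta_j|, |beta_k|). The L1 part drives coefficients to zero, while the pairwise maxima tie coefficients of similar predictors to a common value. The sketch below is an illustration of this convex criterion rather than the authors' algorithm: it solves a Lagrangian form with the generic solver cvxpy, and the weights lam1, lam2 and the toy data are assumptions made for the example.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, p = 50, 8
    X = rng.standard_normal((n, p))
    X[:, 1] = X[:, 0] + 0.05 * rng.standard_normal(n)   # a highly correlated pair
    beta_true = np.array([2.0, 2.0, 0, 0, 0, 0, 0, 0])  # the pair shares one effect
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    lam1, lam2 = 0.5, 0.5  # assumed weights on the L1 and pairwise-max terms
    beta = cp.Variable(p)
    # OSCAR penalty: L1 norm plus the sum of pairwise L-infinity norms
    pairwise = sum(cp.maximum(cp.abs(beta[j]), cp.abs(beta[k]))
                   for j in range(p) for k in range(j + 1, p))
    objective = (0.5 * cp.sum_squares(y - X @ beta)
                 + lam1 * cp.norm1(beta) + lam2 * pairwise)
    cp.Problem(cp.Minimize(objective)).solve()
    print(np.round(beta.value, 3))  # zeros for noise variables; near-equal values for the pair

With weights in this range the two correlated predictors typically receive (nearly) identical coefficients while the noise variables are set to zero, which is the exact-equality behavior the abstract describes. Later work cited below (Bogdan et al. 2015; Su and Candès 2016) noted that OSCAR is a special case of the ordered weighted L1 (SLOPE) family, for which faster dedicated solvers exist.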


References in zbMATH (referenced in 29 articles, 1 standard article)

Showing results 1 to 20 of 29, sorted by year (citations).


  1. Ke, Yuan; Li, Jialiang; Zhang, Wenyang: Structure identification in panel data analysis (2016)
  2. Nguyen, Tu Dinh; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha: Graph-induced restricted Boltzmann machines for document modeling (2016)
  3. Su, Weijie; Candès, Emmanuel: SLOPE is adaptive to unknown sparsity and asymptotically minimax (2016)
  4. Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.: SLOPE-adaptive variable selection via convex optimization (2015)
  5. Jang, Woncheol; Lim, Johan; Lazar, Nicole A.; Loh, Ji Meng; Yu, Donghyeon: Some properties of generalized fused lasso and its applications to high dimensional data (2015)
  6. Liu, Fei; Chakraborty, Sounak; Li, Fan; Liu, Yan; Lozano, Aurelie C.: Bayesian regularization via graph Laplacian (2014)
  7. Narisetty, Naveen Naidu; He, Xuming: Bayesian variable selection with shrinking and diffusing priors (2014)
  8. Oiwa, Hidekazu; Matsushima, Shin; Nakagawa, Hiroshi: Feature-aware regularization for sparse online learning (2014)
  9. Wilson, Ander; Reich, Brian J.: Confounder selection via penalized credible regions (2014)
  10. Yao, Yonggang; Lee, Yoonkyung: Another look at linear programming for feature selection via methods of regularization (2014)
  11. Ahn, Mihye; Zhang, Hao Helen; Lu, Wenbin: Moment-based method for random effects selection in linear mixed models (2012)
  12. Bach, Francis; Jenatton, Rodolphe; Mairal, Julien; Obozinski, Guillaume: Structured sparsity through convex optimization (2012)
  13. Bondell, Howard D.; Reich, Brian J.: Consistent high-dimensional Bayesian variable selection via penalized credible regions (2012)
  14. Petry, Sebastian; Tutz, Gerhard: Shrinkage and variable selection by polytopes (2012)
  15. Shen, Xiaotong; Huang, Hsin-Cheng; Pan, Wei: Simultaneous supervised clustering and feature selection over a graph (2012)
  16. Nia, Vahid; Davison, Anthony: High-dimensional Bayesian clustering with variable selection: the R package bclust (2012)
  17. Zeng, Lingmin; Xie, Jun: Group variable selection for data with dependent structures (2012)
  18. Binder, Harald; Porzelius, Christine; Schumacher, Martin: An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models (2011)
  19. Ghosh, Samiran: On the grouped selection and model complexity of the adaptive elastic net (2011)
  20. Huang, Jian; Ma, Shuangge; Li, Hongzhe; Zhang, Cun-Hui: The sparse Laplacian shrinkage estimator for high-dimensional regression (2011)
