OSCAR

Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Variable selection can be challenging, particularly in situations with many predictors that may be highly correlated, such as gene expression data. In this article, a new method, called OSCAR (octagonal shrinkage and clustering algorithm for regression), is proposed to select variables while simultaneously grouping them into predictive clusters. In addition to improving prediction accuracy and interpretation, the resulting groups can be investigated further to discover what contributes to their similar behavior. The technique is based on penalized least squares with a geometrically intuitive penalty function that shrinks some coefficients to exactly zero. Additionally, this penalty yields exact equality of some coefficients, encouraging correlated predictors that have a similar effect on the response to form predictive clusters represented by a single coefficient. The proposed procedure is shown to compare favorably with existing shrinkage and variable selection techniques in terms of both prediction error and model complexity, while yielding the additional grouping information.
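As a rough illustration of the penalty described above, the sketch below combines an L1 term (which produces exact zeros) with pairwise L-infinity terms (which encourage pairs of coefficients to become exactly equal), and minimizes the penalized least-squares objective on a small synthetic problem. The parameter names `lam` and `c`, and the use of a generic derivative-free optimizer, are assumptions for illustration only; the objective is nondifferentiable, so a dedicated solver would be needed to recover exact zeros and exact coefficient ties in practice.

```python
import numpy as np
from scipy.optimize import minimize

def oscar_penalty(beta, lam=1.0, c=1.0):
    """OSCAR-style penalty: L1 norm plus pairwise L-infinity terms.

    The L1 part shrinks some coefficients to zero; the pairwise
    max(|b_j|, |b_k|) part encourages coefficients to share a value.
    `lam` and `c` are illustrative tuning-parameter names.
    """
    a = np.abs(beta)
    l1 = a.sum()
    pair = sum(max(a[j], a[k])
               for j in range(len(a)) for k in range(j + 1, len(a)))
    return lam * (l1 + c * pair)

def oscar_objective(beta, X, y, lam=1.0, c=1.0):
    """Penalized least-squares objective."""
    resid = y - X @ beta
    return 0.5 * resid @ resid + oscar_penalty(beta, lam, c)

# Tiny synthetic example: two highly correlated predictors with the
# same true effect, two irrelevant ones.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)
beta_true = np.array([2.0, 2.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Nelder-Mead is used only because the penalty is nondifferentiable;
# it will not produce exact zeros/ties the way a dedicated solver would.
res = minimize(oscar_objective, np.zeros(p), args=(X, y, 0.5, 1.0),
               method="Nelder-Mead")
print(np.round(res.x, 2))
```

The geometric intuition is that the constraint region of this penalty is an octagon in two dimensions: its vertices on the axes give sparsity, and its vertices on the diagonals give exact equality of coefficients.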


References in zbMATH (referenced in 32 articles, 1 standard article)

Showing results 1 to 20 of 32.
Sorted by year (citations)


  1. Zhao, Shangwei; Ullah, Aman; Zhang, Xinyu: A class of model averaging estimators (2018)
  2. Alkenani, Ali; Dikheel, Tahir R.: Robust group identification and variable selection in regression (2017)
  3. Jeon, Jong-June; Kwon, Sunghoon; Choi, Hosik: Homogeneity detection for the high-dimensional generalized linear model (2017)
  4. Ke, Yuan; Li, Jialiang; Zhang, Wenyang: Structure identification in panel data analysis (2016)
  5. Nguyen, Tu Dinh; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha: Graph-induced restricted Boltzmann machines for document modeling (2016)
  6. Su, Weijie; Candès, Emmanuel: SLOPE is adaptive to unknown sparsity and asymptotically minimax (2016)
  7. Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.: SLOPE-adaptive variable selection via convex optimization (2015)
  8. Jang, Woncheol; Lim, Johan; Lazar, Nicole A.; Loh, Ji Meng; Yu, Donghyeon: Some properties of generalized fused lasso and its applications to high dimensional data (2015)
  9. Liu, Fei; Chakraborty, Sounak; Li, Fan; Liu, Yan; Lozano, Aurelie C.: Bayesian regularization via graph Laplacian (2014)
  10. Narisetty, Naveen Naidu; He, Xuming: Bayesian variable selection with shrinking and diffusing priors (2014)
  11. Oiwa, Hidekazu; Matsushima, Shin; Nakagawa, Hiroshi: Feature-aware regularization for sparse online learning (2014)
  12. Wilson, Ander; Reich, Brian J.: Confounder selection via penalized credible regions (2014)
  13. Yao, Yonggang; Lee, Yoonkyung: Another look at linear programming for feature selection via methods of regularization (2014)
  14. Ahn, Mihye; Zhang, Hao Helen; Lu, Wenbin: Moment-based method for random effects selection in linear mixed models (2012)
  15. Bach, Francis; Jenatton, Rodolphe; Mairal, Julien; Obozinski, Guillaume: Structured sparsity through convex optimization (2012)
  16. Bondell, Howard D.; Reich, Brian J.: Consistent high-dimensional Bayesian variable selection via penalized credible regions (2012)
  17. Petry, Sebastian; Tutz, Gerhard: Shrinkage and variable selection by polytopes (2012)
  18. Shen, Xiaotong; Huang, Hsin-Cheng; Pan, Wei: Simultaneous supervised clustering and feature selection over a graph (2012)
  19. Nia, Vahid; Davison, Anthony: High-dimensional Bayesian clustering with variable selection: the R package bclust (2012)
  20. Zeng, Lingmin; Xie, Jun: Group variable selection for data with dependent structures (2012)
