boost

R package boost: BagBoosting for tumor classification with gene expression data.

Motivation: Microarray experiments are expected to contribute significantly to progress in cancer treatment by enabling precise and early diagnosis. They create a need for class prediction tools that can deal with a large number of highly correlated input variables, perform feature selection, and provide class probability estimates that quantify the predictive uncertainty. A very promising solution is to combine the two ensemble schemes bagging and boosting into a novel algorithm called BagBoosting.

Results: When bagging is used as a module in boosting, the resulting classifier consistently improves the predictive performance and the probability estimates of both bagging and boosting on real and simulated gene expression data. This quasi-guaranteed improvement can be obtained simply by making a bigger computing effort. The advantageous predictive potential is also confirmed by comparing BagBoosting to several established class prediction tools for microarray data.

Availability: Software for the modified boosting algorithms, for benchmark studies, and for the simulation of microarray data is available as an R package under the GNU General Public License at http://stat.ethz.ch/~dettling/bagboost.html
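The core idea, "bagging as a module in boosting", can be sketched as an AdaBoost-style loop in which each weak learner is replaced by a small bagged ensemble of decision stumps fitted on weighted bootstrap resamples. The sketch below is illustrative only and is written in Python rather than R; all names (bag_size, n_rounds, the toy data) are assumptions for the example, not parts of the package's API, and the paper's actual base procedure (LogitBoost with stumps) is simplified to discrete AdaBoost.

```python
# Hedged sketch of BagBoosting: boosting whose weak learner is itself a bagged
# ensemble of decision stumps. Illustrative only; not the package's implementation.
import math
import random

def stump_fit(X, y, w):
    """Fit a weighted decision stump; returns (feature, threshold, polarity)."""
    best, best_err = (0, 0.0, 1), float("inf")
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] >= t else -pol) != yi)
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best

def stump_predict(stump, x):
    j, t, pol = stump
    return pol if x[j] >= t else -pol

def bagged_learner(X, y, w, bag_size, rng):
    """Bagging module: fit stumps on bootstrap resamples, combine by majority vote."""
    n = len(X)
    stumps = []
    for _ in range(bag_size):
        idx = [rng.randrange(n) for _ in range(n)]
        stumps.append(stump_fit([X[i] for i in idx],
                                [y[i] for i in idx],
                                [w[i] for i in idx]))
    def predict(x):
        return 1 if sum(stump_predict(st, x) for st in stumps) >= 0 else -1
    return predict

def bagboost_fit(X, y, n_rounds=5, bag_size=5, seed=0):
    """AdaBoost loop with the bagged learner plugged in as the weak learner."""
    rng = random.Random(seed)
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(n_rounds):
        h = bagged_learner(X, y, w, bag_size, rng)
        err = sum(wi for xi, yi, wi in zip(X, y, w) if h(xi) != yi)
        err = min(max(err, 1e-10), 1 - 1e-10)      # clamp to keep alpha finite
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict

# Toy linearly separable data standing in for gene-expression profiles.
X = [[0.0, 1.0], [0.2, 0.8], [0.9, 0.1], [1.0, 0.0], [0.1, 0.9], [0.8, 0.2]]
y = [-1, -1, 1, 1, -1, 1]
clf = bagboost_fit(X, y)
print(sum(clf(xi) == yi for xi, yi in zip(X, y)) / len(y))  # training accuracy
```

Replacing the single base learner with its bagged version reduces the variance of each boosting step at the cost of extra computation, which is the "bigger computing effort" trade-off the abstract refers to.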


References in zbMATH (referenced in 43 articles)

Showing results 1 to 20 of 43.
Sorted by year (citations)


  1. Anderlucci, Laura; Fortunato, Francesca; Montanari, Angela: High-dimensional clustering via random projections (2022)
  2. Askari, Armin; d’Aspremont, Alexandre; El Ghaoui, Laurent: Approximation bounds for sparse programs (2022)
  3. Nguyen, Viet Anh; Kuhn, Daniel; Esfahani, Peyman Mohajerin: Distributionally robust inverse covariance estimation: the Wasserstein shrinkage estimator (2022)
  4. Cai, T. Tony; Zhang, Linjun: A convex optimization approach to high-dimensional sparse quadratic discriminant analysis (2021)
  5. Cai, Jia; Huo, Junyi: Sparse generalized canonical correlation analysis via linearized Bregman method (2020)
  6. Chen, Huangyue; Kong, Lingchen; Li, Yan: A novel convex clustering method for high-dimensional data using semiproximal ADMM (2020)
  7. Chen, Huangyue; Kong, Lingchen; Shang, Pan; Pan, Shanshan: Safe feature screening rules for the regularized Huber regression (2020)
  8. Huo, Yanhao; Xin, Lihui; Kang, Chuanze; Wang, Minghui; Ma, Qin; Yu, Bin: SGL-SVM: a novel method for tumor classification via support vector machine with sparse group lasso (2020)
  9. Yang, Aijun; Tian, Yuzhu; Li, Yunxian; Lin, Jinguan: Sparse Bayesian variable selection in kernel probit model for analyzing high-dimensional data (2020)
  10. Yin, Zanhua: Variable selection for sparse logistic regression (2020)
  11. Yang, Aijun; Jiang, Xuejun; Shu, Lianjie; Liu, Pengfei: Sparse Bayesian kernel multinomial probit regression model for high-dimensional data classification (2019)
  12. Jiang, Binyan; Wang, Xiangyu; Leng, Chenlei: A direct approach for sparse quadratic discriminant analysis (2018)
  13. Arias-Castro, Ery; Pu, Xiao: A simple approach to sparse clustering (2017)
  14. Bertsimas, Dimitris; King, Angela; Mazumder, Rahul: Best subset selection via a modern optimization lens (2016)
  15. Cheng, Lulu; Kim, Inyoung; Pang, Herbert: Bayesian semiparametric model for pathway-based analysis with zero-inflated clinical outcomes (2016)
  16. Fan, Yan; Gai, Yujie; Zhu, Lixing: Asymptotics of Dantzig selector for a general single-index model (2016)
  17. Safo, Sandra E.; Ahn, Jeongyoun: General sparse multi-class linear discriminant analysis (2016)
  18. Ahn, Jeongyoun; Jeon, Yongho: Sparse HDLSS discrimination with constrained data piling (2015)
  19. Donoho, David; Jin, Jiashun: Higher criticism for large-scale inference, especially for rare and weak effects (2015)
  20. Müller, Patric; van de Geer, Sara: The partial linear model in high dimensions (2015)
