AdaCost: Misclassification cost-sensitive boosting. AdaCost, a variant of AdaBoost, is a misclassification cost-sensitive boosting method. It uses the cost of misclassifications to update the training distribution on successive boosting rounds, with the aim of reducing the cumulative misclassification cost further than AdaBoost does. We formally show that AdaCost reduces an upper bound on the cumulative misclassification cost of the training set. Empirical evaluations show a significant reduction in cumulative misclassification cost over AdaBoost without requiring additional computing power.
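The cost-sensitive reweighting described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: binary labels in {-1, +1}, per-example costs in [0, 1], a brute-force decision-stump weak learner, and the helper names `adacost_train`/`adacost_predict` are all choices made for this sketch. The cost-adjustment function `beta` follows the published recipe: costly mistakes gain weight faster, costly correct examples lose weight more slowly.

```python
import numpy as np

def adacost_train(X, y, costs, n_rounds=10):
    """AdaCost-style boosting sketch (assumed setup: y in {-1,+1},
    costs in [0,1], axis-aligned decision stumps as weak learners)."""
    n = len(y)
    D = np.full(n, 1.0 / n)   # training distribution over examples
    ensemble = []             # list of (alpha, (feature, threshold, sign))

    for _ in range(n_rounds):
        # brute-force search for the stump minimising weighted error
        best = None
        for feat in range(X.shape[1]):
            for thr in np.unique(X[:, feat]):
                for sign in (1, -1):
                    pred = np.where(X[:, feat] > thr, sign, -sign)
                    err = D[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, (feat, thr, sign))
        _, (feat, thr, sign) = best
        pred = np.where(X[:, feat] > thr, sign, -sign)

        # cost-adjustment: beta = -0.5*c + 0.5 if correct, +0.5*c + 0.5 if wrong
        correct = pred == y
        beta = np.where(correct, -0.5 * costs + 0.5, 0.5 * costs + 0.5)

        # weak-hypothesis weight from the cost-adjusted margin
        r = np.sum(D * y * pred * beta)
        alpha = 0.5 * np.log((1 + r) / (1 - r))
        ensemble.append((alpha, (feat, thr, sign)))

        # update the distribution: costly misclassifications grow fastest
        D *= np.exp(-alpha * y * pred * beta)
        D /= D.sum()
    return ensemble

def adacost_predict(ensemble, X):
    score = sum(alpha * np.where(X[:, f] > t, s, -s)
                for alpha, (f, t, s) in ensemble)
    return np.sign(score)
```

Note that because `beta` enters both the choice of `alpha` and the distribution update, a misclassified high-cost example receives a larger weight increase than under plain AdaBoost, which is the mechanism behind the cost-bound result stated in the abstract.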

References in zbMATH (referenced in 27 articles)

Showing results 1 to 20 of 27.
Sorted by year (citations)


  1. Yu, Suxiang; Zhang, Shuai; Wang, Bin; Dun, Hua; Xu, Long; Huang, Xin; Shi, Ermin; Feng, Xinxing: Generative adversarial network based data augmentation to improve cervical cell classification model (2021)
  2. De Bock, Koen W.; Coussement, Kristof; Lessmann, Stefan: Cost-sensitive business failure prediction when misclassification costs are uncertain: a heterogeneous ensemble selection approach (2020)
  3. Kocheturov, Anton; Pardalos, Panos M.; Karakitsiou, Athanasia: Massive datasets and machine learning for computational biomedicine: trends and challenges (2019)
  4. Maurya, Chandresh Kumar; Toshniwal, Durga: Large-scale distributed sparse class-imbalance learning (2018)
  5. Vanhoeyveld, Jellis; Martens, David: Imbalanced classification in sparse and large behaviour datasets (2018)
  6. Wu, Yu-Ping; Lin, Hsuan-Tien: Progressive random (k)-labelsets for cost-sensitive multi-label classification (2017)
  7. Nikolaou, Nikolaos; Edakunni, Narayanan; Kull, Meelis; Flach, Peter; Brown, Gavin: Cost-sensitive boosting algorithms: do we really need them? (2016)
  8. Li, Qiujie; Mao, Yaobin: A review of boosting methods for imbalanced data classification (2014)
  9. López, Victoria; Fernández, Alberto; García, Salvador; Palade, Vasile; Herrera, Francisco: An insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics (2013)
  10. Yin, Qing-Yan; Zhang, Jiang-She; Zhang, Chun-Xia; Liu, Sheng-Cai: An empirical study on the performance of cost-sensitive boosting algorithms with different levels of class imbalance (2013)
  11. Zhao, Hong; Min, Fan; Zhu, William: Cost-sensitive feature selection of numeric data with measurement errors (2013)
  12. Scott, Clayton: Calibrated asymmetric surrogate losses (2012)
  13. Yuan, Bo; Liu, Wenhuang: Measure oriented training: a targeted approach to imbalanced classification problems (2012)
  14. Min, Fan; He, Huaping; Qian, Yuhua; Zhu, William: Test-cost-sensitive attribute reduction (2011)
  15. Song, Jie; Lu, Xiaoling; Liu, Miao; Wu, Xizhi: Stratified normalization logitboost for two-class unbalanced data classification (2011)
  16. Kriegler, Brian; Berk, Richard: Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting (2010)
  17. Wang, Benjamin X.; Japkowicz, Nathalie: Boosting support vector machines for imbalanced data sets (2010)
  18. Wu, Junjie; Xiong, Hui; Chen, Jian: COG: local decomposition for rare class analysis (2010)
  19. Glady, Nicolas; Baesens, Bart; Croux, Christophe: Modeling churn using customer lifetime value (2009)
  20. Min, Fan; Liu, Qihe: A hierarchical model for test-cost-sensitive decision systems (2009)
