AdaCost: misclassification cost-sensitive boosting. AdaCost, a variant of AdaBoost, is a misclassification cost-sensitive boosting method. It uses the cost of misclassifications to update the training distribution on successive boosting rounds, with the goal of reducing the cumulative misclassification cost further than AdaBoost does. We formally show that AdaCost reduces an upper bound on the cumulative misclassification cost of the training set. Empirical evaluations have shown a significant reduction in cumulative misclassification cost over AdaBoost without consuming additional computing power.
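The distribution update described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes binary labels in {-1, +1}, per-example costs scaled to [0, 1], simple threshold stumps as the weak learner, and the cost-adjustment functions β₊(c) = -0.5c + 0.5 (correct) and β₋(c) = 0.5c + 0.5 (incorrect) reported for AdaCost; costly examples that are misclassified therefore gain weight faster than in plain AdaBoost.

```python
import numpy as np

def beta(correct, c):
    # Cost-adjustment factors (assumed from the AdaCost formulation):
    # beta_plus = -0.5c + 0.5 when correct, beta_minus = 0.5c + 0.5 otherwise.
    return np.where(correct, -0.5 * c + 0.5, 0.5 * c + 0.5)

def train_stump(X, y, D):
    # Exhaustively pick the 1-D threshold stump minimising weighted error.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = D[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    _, j, thr, sign = best
    return lambda X: np.where(X[:, j] <= thr, sign, -sign)

def adacost(X, y, c, T=10):
    # y in {-1, +1}; c in [0, 1] is the per-example misclassification cost.
    D = c / c.sum()                        # cost-proportional initial distribution
    ensemble = []
    for _ in range(T):
        h = train_stump(X, y, D)
        pred = h(X)
        u = y * pred * beta(pred == y, c)  # margin scaled by cost adjustment
        r = np.clip((D * u).sum(), -0.999, 0.999)
        alpha = 0.5 * np.log((1 + r) / (1 - r))
        ensemble.append((alpha, h))
        D = D * np.exp(-alpha * u)         # costly mistakes gain weight fastest
        D /= D.sum()
    return lambda X: np.sign(sum(a * h(X) for a, h in ensemble))
```

Setting all costs equal recovers behaviour close to standard AdaBoost, which makes the role of the β factors easy to inspect on toy data.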

References in zbMATH (referenced in 29 articles)

Showing results 1 to 20 of 29.
Sorted by year (citations)


  1. Hauser, Matthias; Flath, Christoph M.; Thiesse, Frédéric: Catch me if you scan: data-driven prescriptive modeling for smart store environments (2021)
  2. Maliah, Shlomi; Shani, Guy: Using POMDPs for learning cost sensitive decision trees (2021)
  3. Yu, Suxiang; Zhang, Shuai; Wang, Bin; Dun, Hua; Xu, Long; Huang, Xin; Shi, Ermin; Feng, Xinxing: Generative adversarial network based data augmentation to improve cervical cell classification model (2021)
  4. De Bock, Koen W.; Coussement, Kristof; Lessmann, Stefan: Cost-sensitive business failure prediction when misclassification costs are uncertain: a heterogeneous ensemble selection approach (2020)
  5. Kocheturov, Anton; Pardalos, Panos M.; Karakitsiou, Athanasia: Massive datasets and machine learning for computational biomedicine: trends and challenges (2019)
  6. Maurya, Chandresh Kumar; Toshniwal, Durga: Large-scale distributed sparse class-imbalance learning (2018)
  7. Vanhoeyveld, Jellis; Martens, David: Imbalanced classification in sparse and large behaviour datasets (2018)
  8. Wu, Yu-Ping; Lin, Hsuan-Tien: Progressive random (k)-labelsets for cost-sensitive multi-label classification (2017)
  9. Nikolaou, Nikolaos; Edakunni, Narayanan; Kull, Meelis; Flach, Peter; Brown, Gavin: Cost-sensitive boosting algorithms: do we really need them? (2016)
  10. Li, Qiujie; Mao, Yaobin: A review of boosting methods for imbalanced data classification (2014)
  11. López, Victoria; Fernández, Alberto; García, Salvador; Palade, Vasile; Herrera, Francisco: An insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics (2013)
  12. Yin, Qing-Yan; Zhang, Jiang-She; Zhang, Chun-Xia; Liu, Sheng-Cai: An empirical study on the performance of cost-sensitive boosting algorithms with different levels of class imbalance (2013)
  13. Zhao, Hong; Min, Fan; Zhu, William: Cost-sensitive feature selection of numeric data with measurement errors (2013)
  14. Scott, Clayton: Calibrated asymmetric surrogate losses (2012)
  15. Yuan, Bo; Liu, Wenhuang: Measure oriented training: a targeted approach to imbalanced classification problems (2012)
  16. Min, Fan; He, Huaping; Qian, Yuhua; Zhu, William: Test-cost-sensitive attribute reduction (2011)
  17. Song, Jie; Lu, Xiaoling; Liu, Miao; Wu, Xizhi: Stratified normalization logitboost for two-class unbalanced data classification (2011)
  18. Kriegler, Brian; Berk, Richard: Small area estimation of the homeless in Los Angeles: an application of cost-sensitive stochastic gradient boosting (2010)
  19. Wang, Benjamin X.; Japkowicz, Nathalie: Boosting support vector machines for imbalanced data sets (2010)
  20. Wu, Junjie; Xiong, Hui; Chen, Jian: COG: local decomposition for rare class analysis (2010)
