A simple generalisation of the area under the ROC curve for multiple class classification problems

The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two-class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case, and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical, results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.
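The pairwise-averaging idea in the abstract can be sketched in a few lines: for each ordered pair of classes (i, j), estimate the probability that a randomly drawn member of class i receives a higher estimated p(i|x) than a randomly drawn member of class j (a Mann-Whitney-style AUC), symmetrise over the two orderings, and average over all class pairs. The following is a minimal illustrative sketch, not the authors' code; the function names and the dict-based probability representation are assumptions made here for clarity.

```python
from itertools import combinations

def auc_binary(scores_pos, scores_neg):
    # Mann-Whitney estimate of AUC: the probability that a random
    # positive is scored above a random negative (ties count 0.5).
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def hand_till_M(y_true, prob, classes):
    """Multiclass AUC by averaging pairwise AUCs (illustrative sketch).

    y_true  -- list of true class labels
    prob    -- per-sample dicts mapping each class c to the estimated p(c|x)
    classes -- the distinct class labels
    """
    pair_aucs = []
    for i, j in combinations(classes, 2):
        # A(i|j): how well p(i|x) separates class-i from class-j samples.
        si = [p[i] for y, p in zip(y_true, prob) if y == i]
        sj = [p[i] for y, p in zip(y_true, prob) if y == j]
        a_ij = auc_binary(si, sj)
        # A(j|i): the same pair of classes, scored instead by p(j|x).
        ti = [p[j] for y, p in zip(y_true, prob) if y == j]
        tj = [p[j] for y, p in zip(y_true, prob) if y == i]
        a_ji = auc_binary(ti, tj)
        # Symmetrised pairwise measure for the unordered pair {i, j}.
        pair_aucs.append(0.5 * (a_ij + a_ji))
    # Average over all c(c-1)/2 class pairs.
    return sum(pair_aucs) / len(pair_aucs)
```

With two classes the loop runs over a single pair, so the measure collapses to the ordinary two-class AUC, matching the reduction property stated in the abstract. Like the two-class AUC, the result depends only on the rank order of the probability estimates, which is why no misclassification costs need to be specified.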

References in zbMATH (referenced in 49 articles, 1 standard article)

Showing results 1 to 20 of 49, sorted by year (citations).


  1. Ting, Kai Ming; Washio, Takashi; Wells, Jonathan R.; Aryal, Sunil: Defying the gravity of learning curve: a characteristic of nearest neighbour anomaly detectors (2017)
  2. Wang, Shijun; Li, Diana; Petrick, Nicholas; Sahiner, Berkman; Linguraru, Marius George; Summers, Ronald M.: Optimizing area under the ROC curve using semi-supervised learning (2015)
  3. Daqi, Gao; Jun, Ding; Changming, Zhu: Integrated Fisher linear discriminants: an empirical study (2014)
  4. Feng, Guang; Zhang, Jia-Dong; Shaoyi Liao, Stephen: A novel method for combining Bayesian networks, theoretical analysis, and its applications (2014)
  5. He, Yu-Lin; Wang, Ran; Kwong, Sam; Wang, Xi-Zhao: Bayesian classifiers based on probability density estimation and their applications to simultaneous fault diagnosis (2014)
  6. Katz, Gilad; Shabtai, Asaf; Rokach, Lior; Ofek, Nir: ConfDTree: a statistical method for improving decision trees (2014)
  7. Montvida, Olga; Klawonn, Frank: Relative cost curves: an alternative to AUC and an extension to 3-class problems (2014)
  8. Clémençon, Stéphan; Robbiano, Sylvain; Vayatis, Nicolas: Ranking data with ordinal labels: optimality and pairwise aggregation (2013)
  9. Cieslak, David A.; Hoens, T. Ryan; Chawla, Nitesh V.; Kegelmeyer, W. Philip: Hellinger distance decision trees are robust and skew-insensitive (2012)
  10. Flach, Peter: Machine learning. The art and science of algorithms that make sense of data. (2012)
  11. Shiga, Motoki; Mamitsuka, Hiroshi: Efficient semi-supervised learning on locally informative multiple graphs (2012)
  12. Yong, Suet-Peng; Deng, Jeremiah D.; Purvis, Martin K.: Novelty detection in wildlife scenes through semantic context modelling (2012)
  13. Carrizosa, Emilio; Martin-Barragan, Belen: Maximizing upgrading and downgrading margins for ordinal regression (2011)
  14. Fernández-Navarro, Francisco; Hervás-Martínez, César; Gutiérrez, Pedro Antonio: A dynamic over-sampling procedure based on sensitivity for multi-class problems (2011)
  15. Japkowicz, Nathalie; Shah, Mohak: Evaluating learning algorithms. A classification perspective (2011)
  16. Jiang, Liangxiao: Learning random forests for ranking (2011)
  17. Reid, Mark D.; Williamson, Robert C.: Information, divergence and risk for binary experiments (2011)
  18. Waegeman, Willem; De Baets, Bernard: On the ERA ranking representability of pairwise bipartite ranking functions (2011)
  19. Waegeman, Willem; De Baets, Bernard: A transitivity analysis of bipartite rankings in pairwise multi-class classification (2010)
  20. Hamdi, Mohamed; Meddeb-Makhlouf, Amel; Boudriga, Noureddine: Multilayer statistical intrusion detection in wireless networks (2009)
