SHAP (SHapley Additive exPlanations) is a unified approach to explaining the output of any machine learning model. It connects game theory with local explanations, uniting several previous methods [1-7], and represents the only possible consistent and locally accurate additive feature attribution method based on expectations (see our papers for details and citations).
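To make the game-theoretic idea concrete, here is a minimal, self-contained sketch of exact Shapley value computation for a toy model (not using the SHAP library itself; the model, background data, and helper names are illustrative assumptions). It demonstrates the local accuracy property: the expected model output plus the per-feature attributions recovers the prediction.

```python
import itertools
import math

# Toy model (hypothetical): linear terms plus one interaction over three features.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[2]

# Small "background" dataset used to take expectations over features not in a coalition.
background = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.5, 1.0, 0.0)]

def coalition_value(S, x):
    """E[f(z)] with features in coalition S fixed to x; the rest drawn from the background."""
    total = 0.0
    for z in background:
        mixed = [x[i] if i in S else z[i] for i in range(len(x))]
        total += model(mixed)
    return total / len(background)

def shapley_values(x):
    """Exact Shapley values: weighted marginal contributions over all coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[i] += w * (coalition_value(set(S) | {i}, x) - coalition_value(set(S), x))
    return phi

x = (1.0, 2.0, 3.0)
phi = shapley_values(x)
base = coalition_value(set(), x)  # expected model output over the background
# Local accuracy: base value plus attributions equals the model's prediction.
assert abs(base + sum(phi) - model(x)) < 1e-9
```

The exhaustive enumeration is exponential in the number of features; the SHAP methods referenced above exist precisely to approximate or compute these values efficiently for real models.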

References in zbMATH (referenced in 50 articles)

Showing results 1 to 20 of 50.
Sorted by year (citations)


  1. Blanquero, Rafael; Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: On sparse optimal regression trees (2022)
  2. Davila-Pena, Laura; García-Jurado, Ignacio; Casas-Méndez, Balbina: Assessment of the influence of features on a classification problem: an application to COVID-19 patients (2022)
  3. Longo, Luigi; Riccaboni, Massimo; Rungi, Armando: A neural network ensemble approach for GDP forecasting (2022)
  4. Ras, Gabrielle; Xie, Ning; van Gerven, Marcel; Doran, Derek: Explainable deep learning: a field guide for the uninitiated (2022)
  5. Aas, Kjersti; Jullum, Martin; Løland, Anders: Explaining individual predictions when features are dependent: more accurate approximations to Shapley values (2021)
  6. Bogaerts, Bart; Gamba, Emilio; Guns, Tias: A framework for step-wise explaining how to solve constraint satisfaction problems (2021)
  7. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  8. Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: Mathematical optimization in classification and regression trees (2021)
  9. Cheng, Lu; Varshney, Kush R.; Liu, Huan: Socially responsible AI algorithms: issues, purposes, and challenges (2021)
  10. Confalonieri, Roberto; Weyde, Tillman; Besold, Tarek R.; Moscoso del Prado Martín, Fermín: Using ontologies to enhance human understandability of global post-hoc explanations of black-box models (2021)
  11. Delen, Dursun; Zolbanin, Hamed M.; Crosby, Durand; Wright, David: To imprison or not to imprison: an analytics model for drug courts (2021)
  12. Guidotti, Riccardo: Evaluating local explanation methods on ground truth (2021)
  13. Gunnarsson, Björn Rafn; vanden Broucke, Seppe; Baesens, Bart; Óskarsdóttir, María; Lemahieu, Wilfried: Deep learning for credit scoring: do or don’t? (2021)
  14. Hooker, Giles; Mentch, Lucas; Zhou, Siyu: Unrestricted permutation forces extrapolation: variable importance requires at least one more model, or there is no free variable importance (2021)
  15. Ignatiev, Alexey; Marques-Silva, Joao: SAT-based rigorous explanations for decision lists (2021)
  16. Janizek, Joseph D.; Sturmfels, Pascal; Lee, Su-In: Explaining explanations: axiomatic feature interactions for deep networks (2021)
  17. Kenny, Eoin M.; Ford, Courtney; Quinn, Molly; Keane, Mark T.: Explaining black-box classifiers using post-hoc explanations-by-example: the effect of explanations and error-rates in XAI user studies (2021)
  18. Kovalev, Maxim; Utkin, Lev; Coolen, Frank; Konstantinov, Andrei: Counterfactual explanation of machine learning survival models (2021)
  19. Li, Jingyi Jessica; Chen, Yiling Elaine; Tong, Xin: A flexible model-free prediction-based framework for feature ranking (2021)
  20. Livshits, Ester; Bertossi, Leopoldo; Kimelfeld, Benny; Sebag, Moshe: The Shapley value of tuples in query answering (2021)
