shap

SHAP (SHapley Additive exPlanations) is a unified approach to explaining the output of any machine learning model. It connects game theory with local explanations, uniting several previous methods [1-7], and it is the only possible consistent and locally accurate additive feature attribution method based on expectations (see the papers for details and citations).
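The additive attribution SHAP is built on can be illustrated with a brute-force Shapley value computation. The sketch below uses only the standard library rather than the shap package's optimized explainers; the toy model `f`, the input `x`, and the `baseline` used to represent "missing" features are illustrative assumptions, not part of the package's API:

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.
    'Missing' features in a coalition are filled with baseline values
    (a simplifying assumption; SHAP proper averages over a background set)."""
    n = len(x)

    def v(S):
        # value of coalition S: evaluate f with features outside S
        # replaced by their baseline values
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# toy linear model: for linear models the attributions recover the terms exactly
f = lambda z: 2 * z[0] + 3 * z[1]
x, base = [1.0, 1.0], [0.0, 0.0]
phi = shapley_values(f, x, base)
# local accuracy: the attributions sum to f(x) - f(baseline)
```

For this linear model the result is `phi = [2.0, 3.0]`, and their sum equals `f(x) - f(base)`, the local accuracy property that characterizes additive feature attribution methods.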

References in zbMATH (referenced in 19 articles)


  1. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  2. Carrizosa, Emilio; Molero-Río, Cristina; Romero Morales, Dolores: Mathematical optimization in classification and regression trees (2021)
  3. Martino, Ivan: Cooperative games on simplicial complexes (2021)
  4. Sun, Yilun; Wang, Lu: Stochastic tree search for estimating optimal dynamic treatment regimes (2021)
  5. Arya, Vijay; Bellamy, Rachel K. E.; Chen, Pin-Yu; Dhurandhar, Amit; Hind, Michael; Hoffman, Samuel C.; Houde, Stephanie; Liao, Q. Vera; Luss, Ronny; Mojsilović, Aleksandra; Mourad, Sami; Pedemonte, Pablo; Raghavendra, Ramya; Richards, John T.; Sattigeri, Prasanna; Shanmugam, Karthikeyan; Singh, Moninder; Varshney, Kush R.; Wei, Dennis; Zhang, Yunfeng: AI Explainability 360: an extensible toolkit for understanding data and machine learning models (2020)
  6. Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)
  7. Baniecki, Hubert; Kretowicz, Wojciech; Piatyszek, Piotr; Wisniewski, Jakub; Biecek, Przemyslaw: dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python (2020) arXiv
  8. Lavrač, Nada; Škrlj, Blaž; Robnik-Šikonja, Marko: Propositionalization and embeddings: two sides of the same coin (2020)
  9. Lee, Taito; Matsushima, Shin; Yamanishi, Kenji: Grafting for combinatorial binary model using frequent itemset mining (2020)
  10. Lesage, Laurent; Deaconu, Madalina; Lejay, Antoine; Meira, Jorge Augusto; Nichil, Geoffrey; State, Radu: A recommendation system for car insurance (2020)
  11. Ramon, Yanou; Martens, David; Provost, Foster; Evgeniou, Theodoros: A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C (2020)
  12. Shakerin, Farhad; Gupta, Gopal: White-box induction from SVM models: explainable AI with logic programming (2020)
  13. Wu, Min; Wicker, Matthew; Ruan, Wenjie; Huang, Xiaowei; Kwiatkowska, Marta: A game-based approximate verification of deep neural networks with provable guarantees (2020)
  14. Zhang, Kaixuan; Wang, Qinglong; Liu, Xue; Giles, C. Lee: Shapley homology: topological analysis of sample influence for neural networks (2020)
  15. Grabisch, Michel; Labreuche, Christophe; Ridaoui, Mustapha: On importance indices in multicriteria decision making (2019)
  16. Baniecki, Hubert; Biecek, Przemyslaw: modelStudio: Interactive Studio with Explanations for ML Predictive Models (2019) not zbMATH
  17. Labreuche, Christophe: Explaining hierarchical multi-linear models (2019)
  18. Sellereite, Nikolai; Jullum, Martin: shapr: An R-package for explaining machine learning models with dependence-aware Shapley values (2019) not zbMATH
  19. Biecek, Przemysław: DALEX: explainers for complex predictive models in R (2018)