InterpretML

InterpretML: A Unified Framework for Machine Learning Interpretability. InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML covers two types of interpretability: glassbox models, which are machine learning models designed to be interpretable (e.g. linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g. Partial Dependence, LIME). The package lets practitioners easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code can be downloaded from the project's GitHub repository.
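As a rough illustration of the unified API described above, the sketch below trains an Explainable Boosting Machine on a scikit-learn dataset and renders global and local explanations with the built-in visualization platform. It assumes the package's documented glassbox interface (ExplainableBoostingClassifier, explain_global, explain_local, show); the example dataset and exact module paths are illustrative choices and may vary between interpret versions.

```python
# Minimal sketch, assuming the documented interpret glassbox API;
# names and module paths may differ slightly across versions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Load a small tabular dataset and split it for training/evaluation.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the Explainable Boosting Machine, a glassbox generalized additive model.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())

# Local explanation: per-prediction feature contributions for a few test rows.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Blackbox explainers in the package follow the same explain/show pattern, which is what makes side-by-side comparison of methods straightforward.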


References in zbMATH (referenced in 6 articles)

Sorted by year (citations).

  1. Arya, Vijay; Bellamy, Rachel K. E.; Chen, Pin-Yu; Dhurandhar, Amit; Hind, Michael; Hoffman, Samuel C.; Houde, Stephanie; Liao, Q. Vera; Luss, Ronny; Mojsilović, Aleksandra; Mourad, Sami; Pedemonte, Pablo; Raghavendra, Ramya; Richards, John T.; Sattigeri, Prasanna; Shanmugam, Karthikeyan; Singh, Moninder; Varshney, Kush R.; Wei, Dennis; Zhang, Yunfeng: AI Explainability 360: an extensible toolkit for understanding data and machine learning models (2020)
  2. Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek: dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python (2020) arXiv
  3. Kacper Sokol; Alexander Hepburn; Rafael Poyiadzi; Matthew Clifford; Raul Santos-Rodriguez; Peter Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems (2020) not zbMATH
  4. Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson: Captum: A unified and generic model interpretability library for PyTorch (2020) arXiv
  5. Szymon Maksymiuk, Alicja Gosiewska, Przemyslaw Biecek: Landscape of R packages for eXplainable Artificial Intelligence (2020) arXiv
  6. Hubert Baniecki; Przemyslaw Biecek: modelStudio: Interactive Studio with Explanations for ML Predictive Models (2019) not zbMATH