AIF360: AI Fairness 360 Open Source Toolkit. This extensible open-source toolkit helps you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into actual practice in domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.
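To illustrate the kind of fairness metric the toolkit provides, here is a minimal, self-contained sketch of disparate impact (the ratio of favorable-outcome rates between unprivileged and privileged groups). This is plain Python for illustration, not the AIF360 API itself; the function name and the example data are hypothetical.

```python
def disparate_impact(labels, groups, favorable=1, privileged=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value of 1.0 means parity; a common rule of thumb flags
    values below 0.8 as evidence of disparate impact.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    rate_priv = sum(1 for y in priv if y == favorable) / len(priv)
    rate_unpriv = sum(1 for y in unpriv if y == favorable) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical example: 10 privileged and 10 unprivileged applicants.
labels = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6  # 1 = favorable decision
groups = [1] * 10 + [0] * 10                    # 1 = privileged group
print(disparate_impact(labels, groups))         # 0.4 / 0.8 = 0.5
```

In AIF360 itself, metrics like this are exposed through dataset and metric classes rather than free functions, and the mitigation algorithms can then be applied before, during, or after model training.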
References in zbMATH (referenced in 6 articles)
- Cheng, Lu; Varshney, Kush R.; Liu, Huan: Socially responsible AI algorithms: issues, purposes, and challenges (2021)
- Romeo Kienzler, Ivan Nesic: CLAIMED, a visual and scalable component library for Trusted AI (2021) arXiv
- Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, Przemyslaw Biecek: dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python (2020) arXiv
- Kacper Sokol, Alexander Hepburn, Rafael Poyiadzi, Matthew Clifford, Raul Santos-Rodriguez, Peter Flach: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems (2020) not zbMATH
- Plečko, Drago; Meinshausen, Nicolai: Fair data adaptation with quantile preservation (2020)
- Szymon Maksymiuk, Alicja Gosiewska, Przemyslaw Biecek: Landscape of R packages for eXplainable Artificial Intelligence (2020) arXiv