Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization.

We propose a technique for producing "visual explanations" for decisions from a large class of CNN-based models, making them more transparent. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Grad-CAM is applicable to a wide variety of CNN model families: (1) CNNs with fully-connected layers, (2) CNNs used for structured outputs, and (3) CNNs used in tasks with multimodal inputs or reinforcement learning, without any architectural changes or re-training. We combine Grad-CAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes, (b) are robust to adversarial images, (c) outperform previous methods on localization, (d) are more faithful to the underlying model, and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, we show that even non-attention-based models can localize inputs. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure whether Grad-CAM helps users establish appropriate trust in predictions from models, and show that Grad-CAM helps untrained users successfully discern a 'stronger' model from a 'weaker' one even when both make identical predictions.
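The gradient-weighted combination described in the abstract can be sketched in a few lines. The following is a minimal numpy illustration, not the authors' implementation: it assumes the feature maps of the final convolutional layer and the gradients of the target class score with respect to those maps have already been extracted from the network (e.g. via a framework hook), and the array shapes are hypothetical.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Coarse Grad-CAM localization map (illustrative sketch).

    activations: (K, H, W) feature maps A^k of the last conv layer.
    gradients:   (K, H, W) gradients dY^c/dA^k of the target class score Y^c.
    """
    # Neuron-importance weights: global-average-pool the gradients over
    # the spatial dimensions (the alpha_k^c of the paper).
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted combination of the feature maps, then ReLU to keep only
    # features with a positive influence on the class of interest.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for visualization (skip if the map is all-zero).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting coarse H x W map is then upsampled to the input resolution and overlaid on the image; combining it with a fine-grained gradient visualization yields the high-resolution, class-discriminative version mentioned above.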

References in zbMATH (referenced in 22 articles, 1 standard article)

Showing results 1 to 20 of 22, sorted by year (citations).


  1. Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov: MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models (2022) arXiv
  2. Ras, Gabrielle; Xie, Ning; van Gerven, Marcel; Doran, Derek: Explainable deep learning: a field guide for the uninitiated (2022)
  3. Bi, Xin; Zhang, Chao; He, Yao; Zhao, Xiangguo; Sun, Yongjiao; Ma, Yuliang: Explainable time-frequency convolutional neural network for microseismic waveform classification (2021)
  4. Bogaerts, Bart; Gamba, Emilio; Guns, Tias: A framework for step-wise explaining how to solve constraint satisfaction problems (2021)
  5. Burkart, Nadia; Huber, Marco F.: A survey on the explainability of supervised machine learning (2021)
  6. Carmichael, Iain; Calhoun, Benjamin C.; Hoadley, Katherine A.; Troester, Melissa A.; Geradts, Joseph; Couture, Heather D.; Olsson, Linnea; Perou, Charles M.; Niethammer, Marc; Hannig, Jan; Marron, J. S.: Joint and individual analysis of breast cancer histologic images and genomic covariates (2021)
  7. Cozman, Fabio Gagliardi; Munhoz, Hugo Neri: Some thoughts on knowledge-enhanced machine learning (2021)
  8. Ghadai, Sambit; Lee, Xian Yeow; Balu, Aditya; Sarkar, Soumik; Krishnamurthy, Adarsh: Multi-resolution 3D CNN for learning multi-scale spatial features in CAD models (2021)
  9. Guillaume Jaume, Pushpak Pati, Valentin Anklin, Antonio Foncubierta, Maria Gabrani: HistoCartography: A Toolkit for Graph Analytics in Digital Pathology (2021) arXiv
  10. Hoyt, Christopher; Owen, Art B.: Efficient estimation of the ANOVA mean dimension, with an application to neural net classification (2021)
  11. Kiermayer, Mark; Weiß, Christian: Grouping of contracts in insurance using neural networks (2021)
  12. Li, Tianlin; Liu, Aishan; Liu, Xianglong; Xu, Yitao; Zhang, Chongzhi; Xie, Xiaofei: Understanding adversarial robustness via critical attacking route (2021)
  13. Müller, Heimo; Holzinger, Andreas: Kandinsky patterns (2021)
  14. Olson, Matthew L.; Khanna, Roli; Neal, Lawrence; Li, Fuxin; Wong, Weng-Keen: Counterfactual state explanations for reinforcement learning agents via generative deep learning (2021)
  15. Qi, Zhongang; Khorram, Saeed; Fuxin, Li: Embedding deep networks into visual explanations (2021)
  16. Wu, Mike; Parbhoo, Sonali; Hughes, Michael C.; Roth, Volker; Doshi-Velez, Finale: Optimizing for interpretability in deep neural networks with tree regularization (2021)
  17. Xiang, Zhen; Miller, David J.; Wang, Hang; Kesidis, George: Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set (2021)
  18. Huang, Xiaowei; Kroening, Daniel; Ruan, Wenjie; Sharp, James; Sun, Youcheng; Thamo, Emese; Wu, Min; Yi, Xinping: A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability (2020)
  19. Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson: Captum: A unified and generic model interpretability library for PyTorch (2020) arXiv
  20. Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Ting Wang: TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask) (2020) arXiv
