GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers

📅 2024-11-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited interpretability of deep learning image classifiers—particularly their inability to reveal model rules and biases in safety-critical settings—this paper proposes the first post-hoc framework enabling global, textual, and causally verifiable explanations. Methodologically, it integrates visual counterfactual generation with vision-language model (VLM)-driven semantic translation to elevate local attributions into human-readable, globally coherent causal explanations, complemented by an intervention-based causal effect quantification mechanism. The core contribution is a unified “counterfactual–textual–causal” explanatory paradigm that jointly ensures readability, faithfulness, and attributable reasoning. Evaluated across diverse benchmarks—including CLEVR, CelebA, and BDD—the framework effectively uncovers implicit model logic, concept dependencies, and dataset biases. It achieves a 27% improvement in explanation faithfulness over prior methods. Code and pre-trained models are publicly released.

Technology Category

Application Category

📝 Abstract
Understanding deep models is crucial for deploying them in safety-critical applications. We introduce GIFT, a framework for deriving post-hoc, global, interpretable, and faithful textual explanations for vision classifiers. GIFT starts from local faithful visual counterfactual explanations and employs (vision) language models to translate those into global textual explanations. Crucially, GIFT provides a verification stage measuring the causal effect of the proposed explanations on the classifier decision. Through experiments across diverse datasets, including CLEVR, CelebA, and BDD, we demonstrate that GIFT effectively reveals meaningful insights, uncovering tasks, concepts, and biases used by deep vision classifiers. Our code, data, and models are released at https://github.com/valeoai/GIFT.
Problem

Research questions and friction points this paper is trying to address.

Deep Learning Interpretability
Image Classification
Bias Detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretable AI
Visual Explanation
Deep Model Transparency
🔎 Similar Papers
No similar papers found.