CAuSE: Decoding Multimodal Classifiers using Faithful Natural Language Explanation

πŸ“… 2025-12-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Multimodal classifiers lack faithful and interpretable natural language explanations (NLEs), and existing methods fail to accurately reflect their internal decision-making logic. To address this, the authors propose CAuSE, a framework that integrates causal abstraction modeling with interchange intervention training to mechanistically emulate the model's causal reasoning process and generate high-fidelity NLEs. They introduce a causal fidelity metric to quantitatively evaluate explanation quality. Experiments across multiple benchmark datasets demonstrate that CAuSE outperforms state-of-the-art methods, achieving substantial gains in causal fidelity. Qualitative and systematic error analyses further support the plausibility and robustness of its explanations. The code is publicly available.

πŸ“ Abstract
Multimodal classifiers function as opaque black-box models. While several techniques exist to interpret their predictions, few are as intuitive and accessible as natural language explanations (NLEs). To build trust, such explanations must faithfully capture the classifier's internal decision-making behavior, a property known as faithfulness. In this paper, we propose CAuSE (Causal Abstraction under Simulated Explanations), a novel framework to generate faithful NLEs for any pretrained multimodal classifier. We demonstrate that CAuSE generalizes across datasets and models through extensive empirical evaluations. Theoretically, we show that CAuSE, trained via interchange intervention, forms a causal abstraction of the underlying classifier. We further validate this through a redesigned metric for measuring causal faithfulness in multimodal settings. CAuSE surpasses other methods on this metric, with qualitative analysis reinforcing its advantages. We perform detailed error analysis to pinpoint the failure cases of CAuSE. For replicability, we make the code available at https://github.com/newcodevelop/CAuSE.
Problem

Research questions and friction points this paper is trying to address.

How to generate NLEs that faithfully reflect a multimodal classifier's internal decision process
How to measure the causal faithfulness of explanations in multimodal settings
Whether a single explanation framework can generalize across datasets and models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates faithful natural language explanations for any pretrained multimodal classifier
Uses causal abstraction via interchange intervention training
Introduces new metric for measuring causal faithfulness
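The interchange intervention idea behind the training procedure can be illustrated in isolation: run a model on one input, but patch in an internal representation computed from a different input, and check whether the output changes the way a high-level causal model predicts. The sketch below uses a toy two-layer network; the layer choice, shapes, and naming are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Toy two-layer network standing in for one component of a classifier.
# (Illustrative only; not the CAuSE architecture.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden
W2 = rng.normal(size=(3, 2))   # hidden -> logits

def hidden(x):
    """Hidden representation at the intervention site."""
    return np.tanh(x @ W1)

def forward(x, h_override=None):
    """Forward pass; if h_override is given, the hidden state is
    replaced by it (the interchange intervention)."""
    h = hidden(x) if h_override is None else h_override
    return h @ W2

base = rng.normal(size=4)      # input whose prediction we explain
source = rng.normal(size=4)    # input donating its hidden state

# Interchange intervention: run the base input, but patch in the
# hidden representation computed from the source input.
patched_logits = forward(base, h_override=hidden(source))

# If the patched layer fully mediates the decision, the patched run
# matches the source input's own output (true here by construction,
# since the hidden state determines the logits).
print(np.allclose(patched_logits, forward(source)))
```

In causal abstraction terms, an explanation module is faithful when its high-level variables predict the effect of such interventions on the low-level model; training under interventions like this one is what aligns the two levels.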