Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language explanation methods model only first-order feature attributions and thus fail to capture the higher-order interactions between the image and text modalities. To address this limitation, the paper proposes FIxLIP, a cross-modal attribution framework based on the weighted Banzhaf interaction index, which quantifies second- and higher-order feature interactions in joint image-text representations while offering greater flexibility and computational efficiency than Shapley-based interaction quantification. The method applies the weighted Banzhaf index to interpret language-image pre-trained encoders such as CLIP and SigLIP-2, and extends standard evaluation metrics (the pointing game and the area between insertion/deletion curves) to second-order interaction explanations. Experiments on MS COCO and ImageNet-1k show that the approach outperforms state-of-the-art first-order attribution methods. Moreover, it uncovers distinct higher-order interaction patterns across different models (e.g., CLIP vs. SigLIP-2, ViT-B/32 vs. ViT-L/16), enhancing both the interpretability and trustworthiness of cross-modal systems.

📝 Abstract
Language-image pre-training (LIP) enables the development of vision-language models capable of zero-shot classification, localization, multimodal retrieval, and semantic understanding. Various explanation methods have been proposed to visualize the importance of input image-text pairs on the model's similarity outputs. However, popular saliency maps are limited by capturing only first-order attributions, overlooking the complex cross-modal interactions intrinsic to such encoders. We introduce faithful interaction explanations of LIP models (FIxLIP) as a unified approach to decomposing the similarity in vision-language encoders. FIxLIP is rooted in game theory, where we analyze how using the weighted Banzhaf interaction index offers greater flexibility and improves computational efficiency over the Shapley interaction quantification framework. From a practical perspective, we propose how to naturally extend explanation evaluation metrics, like the pointing game and area between the insertion/deletion curves, to second-order interaction explanations. Experiments on MS COCO and ImageNet-1k benchmarks validate that second-order methods like FIxLIP outperform first-order attribution methods. Beyond delivering high-quality explanations, we demonstrate the utility of FIxLIP in comparing different models like CLIP vs. SigLIP-2 and ViT-B/32 vs. ViT-L/16.
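To make the core quantity concrete: the p-weighted Banzhaf interaction of a feature pair (i, j) is the expected discrete second difference of the set function v when every other feature joins the coalition independently with probability p (p = 1/2 recovers the classical Banzhaf interaction index). The sketch below is a minimal Monte Carlo estimator of that definition, not the paper's FIxLIP implementation; the toy game `v` and all names are hypothetical, and in FIxLIP v would instead be the encoder's image-text similarity under feature masking.

```python
import random

def weighted_banzhaf_interaction(value, players, i, j, p=0.5,
                                 n_samples=2000, seed=0):
    """Monte Carlo estimate of the p-weighted Banzhaf interaction for (i, j).

    value: set function v mapping a frozenset of players to a real number.
    Each player other than i and j enters the coalition S independently
    with probability p; the interaction is the expectation of the
    discrete second difference v(S+ij) - v(S+i) - v(S+j) + v(S).
    """
    rng = random.Random(seed)
    rest = [k for k in players if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        S = frozenset(k for k in rest if rng.random() < p)
        total += (value(S | {i, j}) - value(S | {i})
                  - value(S | {j}) + value(S))
    return total / n_samples

# Hypothetical toy game: additive part plus a pairwise bonus of 2.0
# whenever features 0 and 1 are both present.
def v(S):
    return sum(S) + (2.0 if {0, 1} <= S else 0.0)

est = weighted_banzhaf_interaction(v, players=range(4), i=0, j=1)
# The second difference cancels all additive terms, so est == 2.0:
# the estimator isolates exactly the pairwise bonus.
```

Because the additive terms cancel in every sample, the estimate here is exact; for a real similarity game the variance of the estimator, and hence the sample budget, depends on how strongly v deviates from additivity.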
Problem

Research questions and friction points this paper is trying to address.

Explaining complex cross-modal interactions in vision-language encoders
Improving computational efficiency of interaction quantification methods
Extending evaluation metrics for second-order interaction explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses weighted Banzhaf interaction index
Extends evaluation metrics to second-order
Outperforms first-order attribution methods