MATEX: Multi-scale Attention and Text-guided Explainability of Medical Vision-Language Models

📅 2026-01-16
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability of existing medical vision-language models, which stems from their lack of anatomical grounding, imprecise spatial localization, and coarse attention granularity. To overcome these limitations, the authors propose a novel approach that integrates multi-scale attention rollout, text-guided spatial priors, and inter-layer consistency analysis to generate anatomically aligned, high-precision, and stable gradient attribution maps. Experimental results on the MS-CXR dataset demonstrate that the proposed method outperforms the current state-of-the-art M2IB model in both spatial localization accuracy and alignment with expert-annotated lesion regions, thereby significantly enhancing the clinical interpretability of medical AI systems.
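The paper itself does not give implementation details here, but the "multi-scale attention rollout" component presumably builds on standard attention rollout: recursively multiplying per-layer attention matrices, mixed with the identity to account for residual connections. A minimal sketch of that baseline idea (function name, `residual_alpha` mixing weight, and head averaging are illustrative assumptions, not the authors' code):

```python
import numpy as np

def attention_rollout(attn_layers, residual_alpha=0.5):
    """Sketch of multi-layer attention rollout: multiply row-stochastic
    per-layer attention matrices, blending in the identity to model
    residual connections (residual_alpha is an assumed hyperparameter)."""
    n = attn_layers[0].shape[-1]
    rollout = np.eye(n)
    for attn in attn_layers:
        # average over heads if a (heads, n, n) tensor is passed
        if attn.ndim == 3:
            attn = attn.mean(axis=0)
        a = residual_alpha * attn + (1 - residual_alpha) * np.eye(n)
        a = a / a.sum(axis=-1, keepdims=True)  # keep rows stochastic
        rollout = a @ rollout
    return rollout

# toy example: 3 layers of uniform attention over 4 tokens
layers = [np.full((4, 4), 0.25) for _ in range(3)]
R = attention_rollout(layers)  # rows remain a probability distribution
```

The "multi-scale" variant would presumably run this at several spatial resolutions and fuse the maps; that fusion step is not specified in the summary.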

📝 Abstract
We introduce MATEX (Multi-scale Attention and Text-guided Explainability), a novel framework that advances interpretability in medical vision-language models by incorporating anatomically informed spatial reasoning. MATEX synergistically combines multi-layer attention rollout, text-guided spatial priors, and layer consistency analysis to produce precise, stable, and clinically meaningful gradient attribution maps. By addressing key limitations of prior methods, such as spatial imprecision, lack of anatomical grounding, and limited attention granularity, MATEX enables more faithful and interpretable model explanations. Evaluated on the MS-CXR dataset, MATEX outperforms the state-of-the-art M2IB approach in both spatial precision and alignment with expert-annotated findings. These results highlight MATEX's potential to enhance trust and transparency in radiological AI applications.
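The abstract's "text-guided spatial priors" are not specified further, but a common way to realize such a prior in vision-language models is to score each image patch by its similarity to the text (query) embedding and normalize the scores into a spatial weighting. A hedged sketch under that assumption (the function, the temperature `tau`, and all shapes are illustrative, not MATEX's actual formulation):

```python
import numpy as np

def text_guided_prior(patch_embs, text_emb, tau=0.07):
    """Hypothetical text-guided spatial prior: softmax over cosine
    similarity between each patch embedding and the text embedding,
    usable to re-weight a saliency/attribution map."""
    p = patch_embs / np.linalg.norm(patch_embs, axis=-1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sims = p @ t                         # (num_patches,) cosine scores
    w = np.exp(sims / tau)
    return w / w.sum()                   # non-negative, sums to 1

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 64))     # e.g. a 14x14 patch grid, dim 64
query = rng.normal(size=64)              # e.g. embedding of a finding phrase
prior = text_guided_prior(patches, query)
saliency = rng.random(196)
guided = prior * saliency                # text-reweighted attribution map
```

In the MATEX pipeline this prior would then be combined with the rollout maps and filtered by the inter-layer consistency analysis the abstract mentions; those combination rules are not described in this summary.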
Problem

Research questions and friction points this paper is trying to address.

interpretability
medical vision-language models
spatial precision
anatomical grounding
attention granularity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale Attention
Text-guided Explainability
Medical Vision-Language Models
Anatomically Informed Reasoning
Gradient Attribution