Uncertainty-Aware Variational Information Pursuit for Interpretable Medical Image Analysis

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing explainable-AI methods for medical imaging struggle to balance interpretability and reliability, largely because they neglect instance-level uncertainty, which leads to untrustworthy explanations. To address this, we propose an uncertainty-aware, end-to-end interpretable learning paradigm. For the first time, our approach jointly models epistemic and aleatoric uncertainty within the Variational Information Pursuit (V-IP) framework, integrating Bayesian deep learning, concept-level attention, and differentiable concept discovery. This enables precise, uncertainty-quantified attribution while keeping explanations concise and clinically interpretable. Evaluated on four benchmark datasets (PH2, Derm7pt, BrEaST, and SkinCon), our method achieves an average AUC improvement of approximately 3.2% and shortens explanations by 20% without sacrificing informativeness, substantially improving model trustworthiness and clinical applicability.
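
The summary does not include code, so here is a rough, hedged sketch of what jointly estimating the two uncertainty types for concept answers can look like, using Monte Carlo dropout to separate epistemic (model) from aleatoric (data) uncertainty per concept. Everything here (`ConceptNet`, `uncertainty_per_concept`, the threshold) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConceptNet(nn.Module):
    """Toy concept-answer head: image features -> per-concept probabilities."""
    def __init__(self, feat_dim: int, num_concepts: int, p_drop: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at test time for MC dropout
            nn.Linear(128, num_concepts),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # P(concept is present | image)

@torch.no_grad()
def uncertainty_per_concept(model: ConceptNet, x: torch.Tensor, T: int = 20):
    """Return (mean answer, epistemic, aleatoric) estimates per concept.

    Epistemic: variance of the predicted probability across T stochastic passes.
    Aleatoric: mean Bernoulli variance p * (1 - p) across the same passes.
    """
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack([model(x) for _ in range(T)])  # (T, batch, concepts)
    mean = probs.mean(dim=0)
    epistemic = probs.var(dim=0)
    aleatoric = (probs * (1.0 - probs)).mean(dim=0)
    return mean, epistemic, aleatoric

if __name__ == "__main__":
    model = ConceptNet(feat_dim=64, num_concepts=10)
    x = torch.randn(2, 64)
    mean, epi, ale = uncertainty_per_concept(model, x)
    reliable = (epi + ale) < 0.15  # illustrative threshold, not from the paper
```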

📝 Abstract
In medical imaging, AI decision-support systems must balance accuracy and interpretability to build user trust and support effective clinical decision-making. Recently, Variational Information Pursuit (V-IP) and its variants have emerged as interpretable-by-design modeling techniques, aiming to explain AI decisions in terms of human-understandable, clinically relevant concepts. However, existing V-IP methods overlook instance-level uncertainties in query-answer generation, which can arise from model limitations (epistemic uncertainty) or variability in expert responses (aleatoric uncertainty). This paper introduces Uncertainty-Aware V-IP (UAV-IP), a novel framework that integrates uncertainty quantification into the V-IP process. We evaluate UAV-IP across four medical imaging datasets (PH2, Derm7pt, BrEaST, and SkinCon), demonstrating an average AUC improvement of approximately 3.2% while generating 20% more concise explanations compared to baseline V-IP, without sacrificing informativeness. These findings highlight the importance of uncertainty-aware reasoning in interpretable-by-design models for robust and reliable medical decision-making.
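
To make the pursuit mechanism concrete, the sketch below shows a greedy query-selection loop that skips query-answers flagged as unreliable, in the spirit of UAV-IP. This is a simplified illustration under stated assumptions: `posterior` (any callable mapping a partial set of concept answers to class probabilities), the thresholds, and the realized-answer information gain are all placeholders, not the paper's variational objective or code.

```python
import torch

def entropy(p: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of a probability vector."""
    return -(p * p.clamp_min(1e-8).log()).sum(dim=-1)

def pursue(posterior, answers, reliable, max_queries=10, stop_nats=0.1):
    """Greedily ask the reliable concept whose answer most reduces label entropy.

    posterior: callable mapping {concept index: answer} -> class probabilities
    answers:   (C,) observed concept answers for one image
    reliable:  (C,) boolean mask, e.g. from the uncertainty sketch above
    """
    asked, history = [], {}
    for _ in range(max_queries):
        base = entropy(posterior(history))
        if base < stop_nats:
            break  # the label is already near-certain; stop explaining
        best, best_gain = None, 0.0
        for c in range(len(answers)):
            if c in asked or not reliable[c]:
                continue  # skip query-answers flagged as too uncertain
            gain = base - entropy(posterior({**history, c: answers[c]}))
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break  # no remaining reliable concept adds information
        asked.append(best)
        history[best] = answers[best]
    return asked, history

if __name__ == "__main__":
    C, K = 6, 3  # concepts, classes
    W = torch.randn(C, K)  # toy per-concept evidence weights

    def posterior(hist):
        logits = torch.zeros(K)
        for c, a in hist.items():
            logits = logits + (1.0 if a > 0.5 else -1.0) * W[c]
        return torch.softmax(logits, dim=-1)

    asked, hist = pursue(posterior, torch.rand(C), torch.ones(C, dtype=torch.bool))
    print("queries asked:", asked)
```

Filtering unreliable answers before selection is one plausible reading of why explanations get shorter: uncertain concepts never enter the explanation, so the loop terminates with fewer, more informative queries.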
Problem

Research questions and friction points this paper is trying to address.

Balancing accuracy and interpretability in medical AI decision-support systems
Accounting for instance-level epistemic and aleatoric uncertainty in interpretable medical image analysis
Improving the reliability and trustworthiness of AI explanations in clinical settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates epistemic and aleatoric uncertainty quantification into the V-IP query-answer process (UAV-IP)
Improves average AUC by approximately 3.2% across PH2, Derm7pt, BrEaST, and SkinCon
Generates 20% more concise explanations than baseline V-IP without sacrificing informativeness