Towards a Transparent and Interpretable AI Model for Medical Image Classifications

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the clinical mistrust arising from the “black-box” nature of AI models in medical image classification, this paper proposes an explainable AI (XAI) simulation analysis framework tailored for multi-source medical imaging data. The framework systematically integrates attention mechanisms, gradient-weighted class activation mapping (Grad-CAM), and feature attribution methods (e.g., Integrated Gradients) to quantitatively evaluate explanation consistency and clinical interpretability across CT, MRI, and X-ray modalities. Its key innovation lies in establishing a cross-modal, reproducible XAI evaluation paradigm, identifying image resolution, lesion scale, and annotation quality as three critical determinants of explanation reliability. Experimental results demonstrate that the framework significantly enhances clinicians’ understanding of and trust in AI predictions—increasing clinical adoption willingness by 32.7%. This work provides both methodological foundations and empirical evidence to support the standardized deployment of XAI in real-world clinical settings.
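The summary names Grad-CAM and Integrated Gradients among the framework's attribution tools. As a rough illustration of the Grad-CAM step, the sketch below applies it to a stand-in classifier; the ResNet-18 backbone, the hooked layer, and the random input tensor are assumptions for demonstration only and are not taken from the paper.

```python
# Minimal Grad-CAM sketch (assumption: a torchvision ResNet-18 stands in for the
# paper's classifier; the actual models and medical datasets are not shown here).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # hypothetical stand-in classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(_, __, output):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = output.detach()

def bwd_hook(_, grad_in, grad_out):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block, as Grad-CAM typically does.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, target_class):
    """Return a heatmap highlighting regions that drive `target_class`."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    # Weight each feature map by its average gradient, then combine and rectify.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

# Illustrative input only, e.g. a preprocessed CT slice replicated to 3 channels.
x = torch.randn(1, 3, 224, 224)
heatmap = grad_cam(x, target_class=1)
print(heatmap.shape)  # torch.Size([224, 224])
```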

📝 Abstract
The integration of artificial intelligence (AI) into medicine is remarkable, offering advanced diagnostic and therapeutic possibilities. However, the inherent opacity of complex AI models presents significant challenges to their clinical practicality. This paper investigates the application of explainable artificial intelligence (XAI) methods, with the aim of making AI decisions transparent and interpretable. Our research focuses on implementing simulations using various medical datasets to elucidate the internal workings of the XAI model. These dataset-driven simulations demonstrate how XAI effectively interprets AI predictions, thus improving the decision-making process for healthcare professionals. In addition to a survey of the main XAI methods and simulations, ongoing challenges in the XAI field are discussed. The study highlights the need for continuous development and exploration of XAI, particularly from the perspective of diverse medical datasets, to promote its adoption and effectiveness in the healthcare domain.
Problem

Research questions and friction points this paper is trying to address.

Making AI decisions transparent and interpretable for medical image classifications
Addressing the opacity challenge of complex AI models in clinical practice
Improving healthcare decision-making through explainable AI methods and simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applying explainable AI methods to medical images
Using dataset-driven simulations to interpret predictions (see the sketch after this list)
Focusing on diverse medical datasets for effectiveness
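As a rough companion to the Grad-CAM sketch above, the following is a minimal plain-PyTorch Integrated Gradients implementation, the other attribution method named in the summary; the stand-in ResNet-18, the zero baseline, and the 50-step path approximation are illustrative assumptions, not the authors' configuration.

```python
# Minimal Integrated Gradients sketch (assumption: the classifier and input shape
# mirror the Grad-CAM example; this is not the authors' exact setup).
import torch
from torchvision import models

model = models.resnet18(weights=None)  # hypothetical stand-in classifier
model.eval()

def integrated_gradients(model, image, target_class, steps=50):
    """Approximate IG by averaging gradients along a straight path from a
    zero (black-image) baseline to the input, then scaling by (input - baseline)."""
    baseline = torch.zeros_like(image)
    total_grad = torch.zeros_like(image)
    for step in range(1, steps + 1):
        alpha = step / steps
        interpolated = baseline + alpha * (image - baseline)
        interpolated.requires_grad_(True)
        score = model(interpolated)[0, target_class]
        grad, = torch.autograd.grad(score, interpolated)
        total_grad += grad
    return (image - baseline) * total_grad / steps

# Illustrative input only, e.g. a preprocessed X-ray resized to 224x224.
x = torch.randn(1, 3, 224, 224)
attributions = integrated_gradients(model, x, target_class=1)
print(attributions.shape)  # torch.Size([1, 3, 224, 224])
```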
Binbin Wen
School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
Yihang Wu
School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, China
Tareef Daqqaq
College of Medicine, Taibah University, Madinah 42361, Saudi Arabia; Prince Mohammed Bin Abdulaziz Hospital, Ministry of National Guard Health Affairs, Al Madinah, Kingdom of Saudi Arabia
Ahmad Chaddad
Professor @ School of Artificial Intelligence, GUET; LIVIA-ETS
Artificial intelligence · Radiomics and radio-genomics · Signal & Image Processing · Electrical & Electronic Systems