Explainability Through Human-Centric Design for XAI in Lung Cancer Detection

📅 2025-05-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Deep learning models face clinical deployment barriers in lung cancer detection due to opaque decision-making. To address this, we propose XpertXAI, a clinician-guided concept bottleneck model (CBM) designed for clinical trustworthiness. XpertXAI explicitly incorporates radiologists' clinical reasoning into a supervised CBM architecture (built on InceptionV3), jointly optimizing multi-pathology pulmonary diagnosis and fine-grained, concept-level interpretability by integrating structured radiology reports and expert-annotated imaging concepts. Compared with mainstream post-hoc explanation methods and an unsupervised CBM, XpertXAI achieves significantly higher lung cancer detection accuracy, attains 92% recall on critical imaging signs, and markedly improves alignment between concept-level explanations and radiologist judgments. These results empirically support the efficacy and clinical suitability of expert-driven architectural design in interpretable medical AI.
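To make the architecture concrete, here is a minimal, hypothetical PyTorch sketch of a supervised concept bottleneck model over an InceptionV3 backbone, in the spirit of the design described above. The concept and pathology counts, the loss weighting `lam`, and the sigmoid-activated concept scores are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConceptBottleneck(nn.Module):
    """Illustrative supervised concept bottleneck over an InceptionV3 backbone."""
    def __init__(self, num_concepts: int = 20, num_pathologies: int = 14):
        super().__init__()
        # InceptionV3 feature extractor; final classifier replaced with identity.
        self.backbone = models.inception_v3(weights=None, aux_logits=False)
        self.backbone.fc = nn.Identity()
        # Bottleneck: every prediction must pass through scores for
        # human-interpretable clinical concepts (e.g. "nodule", "effusion").
        self.concept_head = nn.Linear(2048, num_concepts)
        # Pathologies are predicted from the concepts alone, so each diagnosis
        # can be read off as a weighting of expert-defined concepts.
        self.label_head = nn.Linear(num_concepts, num_pathologies)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)                    # (B, 2048)
        concept_logits = self.concept_head(feats)   # (B, num_concepts)
        label_logits = self.label_head(torch.sigmoid(concept_logits))
        return concept_logits, label_logits

def joint_loss(concept_logits, label_logits, concept_targets, label_targets,
               lam: float = 0.5):
    # Joint objective: supervise the bottleneck with expert-annotated concepts
    # while also training the multi-label pathology classifier. `lam` is an
    # assumed trade-off weight, not a value reported in the paper.
    bce = nn.BCEWithLogitsLoss()
    return bce(label_logits, label_targets) + lam * bce(concept_logits, concept_targets)
```

The bottleneck forces every diagnosis to be expressed through the expert-defined concept layer, which is what makes concept-level explanations possible.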

📝 Abstract
Deep learning models have shown promise in lung pathology detection from chest X-rays, but widespread clinical adoption remains limited due to opaque model decision-making. In prior work, we introduced ClinicXAI, a human-centric, expert-guided concept bottleneck model (CBM) designed for interpretable lung cancer diagnosis. We now extend that approach and present XpertXAI, a generalizable expert-driven model that preserves human-interpretable clinical concepts while scaling to detect multiple lung pathologies. Using a high-performing InceptionV3-based classifier and a public dataset of chest X-rays with radiology reports, we compare XpertXAI against leading post-hoc explainability methods and an unsupervised CBM, XCBs. We assess explanations through comparison with expert radiologist annotations and medical ground truth. Although XpertXAI is trained for multiple pathologies, our expert validation focuses on lung cancer. We find that existing techniques frequently fail to produce clinically meaningful explanations, omitting key diagnostic features and disagreeing with radiologist judgments. XpertXAI not only outperforms these baselines in predictive accuracy but also delivers concept-level explanations that better align with expert reasoning. While our focus remains on explainability in lung cancer detection, this work illustrates how human-centric model design can be effectively extended to broader diagnostic contexts, offering a scalable path toward clinically meaningful explainable AI in medical diagnostics.
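The evaluation described in the abstract compares concept-level explanations against expert radiologist annotations. Below is a hedged sketch of one such metric, recall of critical imaging signs; the 0.5 threshold, the choice of critical concept indices, and the toy data are assumptions for illustration only.

```python
import numpy as np

def critical_sign_recall(concept_scores: np.ndarray,
                         expert_annotations: np.ndarray,
                         critical_idx: list,
                         threshold: float = 0.5) -> float:
    """Recall of expert-annotated critical imaging signs among predicted concepts."""
    preds = concept_scores[:, critical_idx] >= threshold
    truth = expert_annotations[:, critical_idx].astype(bool)
    tp = np.logical_and(preds, truth).sum()
    fn = np.logical_and(~preds, truth).sum()
    return tp / max(tp + fn, 1)

# Toy example: 3 studies, 4 concepts, with concepts 0 ("nodule") and
# 2 ("mass") designated critical. Values are invented for illustration.
scores = np.array([[0.9, 0.1, 0.7, 0.2],
                   [0.2, 0.8, 0.4, 0.6],
                   [0.8, 0.3, 0.9, 0.1]])
labels = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 0],
                   [1, 0, 1, 0]])
print(critical_sign_recall(scores, labels, critical_idx=[0, 2]))  # 0.8
```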
Problem

Research questions and friction points this paper is trying to address.

Enhancing explainability in lung cancer detection models
Addressing opaque decision-making in deep learning diagnostics
Scaling human-interpretable concepts for multiple lung pathologies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centric expert-guided concept bottleneck model
Generalizable expert-driven model for multiple pathologies
High-performing InceptionV3-based classifier integration