XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes XMorph, a novel framework addressing the challenges of poor interpretability, high computational cost, and imprecise delineation of irregular tumor boundaries in deep learning–based brain tumor diagnosis. XMorph innovatively integrates nonlinear chaotic features with clinically validated features, introduces an Information-Weighted Boundary Normalization (IWBN) mechanism, and incorporates a dual-channel interpretable AI module that simultaneously generates GradCAM++ heatmaps and large language model–driven clinical text explanations. Evaluated on a three-class classification task involving glioma, meningioma, and pituitary tumor, the model achieves 96.0% accuracy while significantly enhancing transparency and clinical trustworthiness, demonstrating that high performance and strong interpretability can be synergistically achieved. The code is publicly available.
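The dual-channel idea pairs a saliency map with a textual rationale. Below is a minimal sketch of both channels, assuming one convolutional layer's activations and class-score gradients are available; the Grad-CAM++ channel weighting follows the published formula, while `gradcam_pp_map` and `explain` are hypothetical names and the template string stands in for the paper's LLM-generated clinical text:

```python
import numpy as np

def gradcam_pp_map(activations, gradients):
    """Grad-CAM++-style saliency map from one conv layer (sketch).

    activations, gradients: (C, H, W) arrays for the predicted class.
    This is an illustrative guess; the paper's layer/model choices
    are not shown here.
    """
    grads2 = gradients ** 2
    grads3 = gradients ** 3
    # Grad-CAM++ alpha weights: g^2 / (2 g^2 + sum(A * g^3))
    global_sum = (activations * grads3).sum(axis=(1, 2), keepdims=True)
    denom = 2.0 * grads2 + global_sum
    alpha = grads2 / np.where(denom != 0, denom, 1e-8)
    # per-channel importance from ReLU'd gradients
    weights = (alpha * np.maximum(gradients, 0)).sum(axis=(1, 2))
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]

def explain(label, cam, thresh=0.5):
    """Templated stand-in for the LLM-generated rationale channel."""
    frac = float((cam > thresh).mean())
    focus = "a compact focal region" if frac < 0.25 else "a diffuse region"
    return (f"Predicted {label}; high-saliency pixels cover "
            f"{frac:.0%} of the scan, suggesting {focus}.")
```

In the paper the second channel is produced by a large language model conditioned on the model's output; the template above only illustrates how the two channels could be surfaced together.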

📝 Abstract
Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limited by interpretability and computational constraints. Conventional models often act as opaque "black boxes" and fail to quantify the complex, irregular tumor boundaries that characterize malignant growth. To address these challenges, we present XMorph, an explainable and computationally efficient framework for fine-grained classification of three prominent brain tumor types: glioma, meningioma, and pituitary tumors. We propose an Information-Weighted Boundary Normalization (IWBN) mechanism that emphasizes diagnostically relevant boundary regions alongside nonlinear chaotic and clinically validated features, enabling a richer morphological representation of tumor growth. A dual-channel explainable AI module combines GradCAM++ visual cues with LLM-generated textual rationales, translating model reasoning into clinically interpretable insights. The proposed framework achieves a classification accuracy of 96.0%, demonstrating that explainability and high performance can co-exist in AI-based medical imaging systems. The source code and materials for XMorph are all publicly available at: https://github.com/ALSER-Lab/XMorph.
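The abstract does not spell out IWBN's formula. One plausible reading is that pixels are weighted by normalized boundary strength so that edge regions dominate downstream features; the sketch below works under that assumption, and both the function name and the gradient-magnitude weighting are guesses, not the paper's definition:

```python
import numpy as np

def iwbn_weight_map(image, eps=1e-8):
    """Hypothetical sketch of an information-weighted boundary map.

    Emphasizes high-gradient (boundary) pixels and normalizes the
    weights to sum to 1, so diagnostically relevant boundary regions
    receive most of the mass. Illustrative only.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    boundary_strength = np.sqrt(gx**2 + gy**2)  # edge magnitude
    weights = boundary_strength / (boundary_strength.sum() + eps)
    return weights

# toy example: a bright square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
w = iwbn_weight_map(img)
# weight mass concentrates on the square's border, not its interior
```

A map like this could multiply feature activations or loss terms so that irregular boundaries, which the abstract singles out as hallmarks of malignant growth, are emphasized during training.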
Problem

Research questions and friction points this paper is trying to address.

explainability
brain tumor analysis
tumor boundary
computational efficiency
clinical interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Information-Weighted Boundary Normalization
LLM-assisted reasoning
Brain tumor classification
Hybrid deep intelligence
Sepehr Salem Ghahfarokhi
Department of Computer Science, Georgia State University, Atlanta, GA, USA
M. Moein Esfahani
TReNDS Center, Georgia State University, Atlanta, GA, USA
Raj Sunderraman
Department of Computer Science, Georgia State University, Atlanta, GA, USA
Vince Calhoun
TReNDS Center, Georgia State University, Atlanta, GA, USA
Mohammed Alser
Tenure-Track Assistant Professor, Georgia State University, ALSER Lab
Bioinformatics · Metagenomics · Computational Genomics · Computer Architecture