🤖 AI Summary
Autism Spectrum Disorder (ASD) diagnosis lacks objectivity and biological interpretability. Method: We propose an interpretable deep learning framework integrating Graph Convolutional Networks (GCNs) with attention-guided Grad-CAM, trained on multi-site ABIDE fMRI data (N=884) for automated ASD classification. To our knowledge, this is the first approach in ASD diagnosis to jointly optimize high classification accuracy and neurobiological interpretability via functional connectivity modeling and explainability-driven joint training. Contribution/Results: The model precisely localizes discriminative brain regions—including the default mode network, amygdala, and prefrontal cortex—validated across independent datasets and complementary neuroimaging modalities. It achieves 89.7% classification accuracy, significantly outperforming existing baselines. This work establishes a novel paradigm for early, mechanism-grounded, and objective ASD diagnosis.
📝 Abstract
Early diagnosis and intervention for Autism Spectrum Disorder (ASD) has been shown to significantly improve the quality of life of autistic individuals. However, diagnostics methods for ASD rely on assessments based on clinical presentation that are prone to bias and can be challenging to arrive at an early diagnosis. There is a need for objective biomarkers of ASD which can help improve diagnostic accuracy. Deep learning (DL) has achieved outstanding performance in diagnosing diseases and conditions from medical imaging data. Extensive research has been conducted on creating models that classify ASD using resting-state functional Magnetic Resonance Imaging (fMRI) data. However, existing models lack interpretability. This research aims to improve the accuracy and interpretability of ASD diagnosis by creating a DL model that can not only accurately classify ASD but also provide explainable insights into its working. The dataset used is a preprocessed version of the Autism Brain Imaging Data Exchange (ABIDE) with 884 samples. Our findings show a model that can accurately classify ASD and highlight critical brain regions differing between ASD and typical controls, with potential implications for early diagnosis and understanding of the neural basis of ASD. These findings are validated by studies in the literature that use different datasets and modalities, confirming that the model actually learned characteristics of ASD and not just the dataset. This study advances the field of explainable AI in medical imaging by providing a robust and interpretable model, thereby contributing to a future with objective and reliable ASD diagnostics.