DuSCN-FusionNet: An Interpretable Dual-Channel Structural Covariance Fusion Framework for ADHD Classification Using Structural MRI

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited reliability and interpretability of existing neuroimaging biomarkers for attention-deficit/hyperactivity disorder (ADHD), which hinder clinical diagnosis. To overcome this, the authors propose a dual-channel structural covariance network (SCN) that separately models regional mean intensity and intra-regional heterogeneity. The framework integrates region-of-interest (ROI)-level variability features with global statistical measures through late fusion. By adapting Grad-CAM to the SCN architecture, the method generates regional importance scores, enhancing model interpretability and uncovering potential anatomical biomarkers. Evaluated on the Peking University (Beijing) site of the ADHD-200 dataset, the approach achieves a balanced accuracy of 80.59%, an AUC of 0.778, precision of 81.66%, recall of 80.59%, and an F1-score of 80.27%.
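The paper does not spell out how the dual-channel SCNs are built, but a common construction is a Pearson-correlation matrix over regional descriptors, with one channel per descriptor type. The sketch below is a minimal, hypothetical illustration of that idea using random data: `roi_mean` and `roi_std` stand in for the ROI-wise mean-intensity and intra-regional-variability descriptors named in the summary; the actual DuSCN-FusionNet pipeline may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_rois = 40, 16

# Hypothetical ROI descriptors: for each subject, a mean intensity and an
# intra-regional variability value (e.g. standard deviation) per ROI.
roi_mean = rng.normal(size=(n_subjects, n_rois))
roi_std = np.abs(rng.normal(size=(n_subjects, n_rois)))

def scn(features):
    """Group-level SCN: Pearson correlation, computed across subjects,
    between every pair of regional descriptors."""
    return np.corrcoef(features, rowvar=False)  # shape (n_rois, n_rois)

intensity_scn = scn(roi_mean)       # channel 1: intensity-based SCN
heterogeneity_scn = scn(roi_std)    # channel 2: heterogeneity-based SCN

# Stack the two symmetric matrices as a two-channel image, the natural
# input format for an SCN-CNN encoder.
dual_channel = np.stack([intensity_scn, heterogeneity_scn])
print(dual_channel.shape)  # (2, 16, 16)
```

The two channels share ROI indexing, so a 2D CNN can learn joint patterns over the same pair of regions in both covariance structures.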

📝 Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental condition; however, its neurobiological diagnosis remains challenging due to the lack of reliable imaging-based biomarkers, particularly anatomical markers. Structural MRI (sMRI) provides a non-invasive modality for investigating brain alterations associated with ADHD; nevertheless, most deep learning approaches function as black-box systems, limiting clinical trust and interpretability. In this work, we propose DuSCN-FusionNet, an interpretable sMRI-based framework for ADHD classification that leverages dual-channel Structural Covariance Networks (SCNs) to capture inter-regional morphological relationships. ROI-wise mean intensity and intra-regional variability descriptors are used to construct intensity-based and heterogeneity-based SCNs, which are processed through an SCN-CNN encoder. In parallel, auxiliary ROI-wise variability features and global statistical descriptors are integrated via late-stage fusion to enhance performance. The model is evaluated using stratified 10-fold cross-validation with a 5-seed ensemble strategy, achieving a mean balanced accuracy of 80.59% and an AUC of 0.778 on the Peking University site of the ADHD-200 dataset. DuSCN-FusionNet further achieves precision, recall, and F1-scores of 81.66%, 80.59%, and 80.27%, respectively. Moreover, Grad-CAM is adapted to the SCN domain to derive ROI-level importance scores, enabling the identification of structurally relevant brain regions as potential biomarkers.
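The abstract's Grad-CAM adaptation yields a heatmap over the SCN input, in which entry (i, j) scores the covariance edge between ROIs i and j rather than a spatial location. One plausible way to turn such an edge-level heatmap into the ROI-level importance scores the paper reports is to average the relevance of all edges incident to each region. The snippet below is a hedged sketch of that aggregation step only, on a random stand-in heatmap; the paper's exact reduction is not specified.

```python
import numpy as np

n_rois = 16
rng = np.random.default_rng(1)

# Hypothetical Grad-CAM heatmap over the n_rois x n_rois SCN input,
# ReLU-clipped as in standard Grad-CAM (negative evidence discarded).
heatmap = np.maximum(rng.normal(size=(n_rois, n_rois)), 0.0)

# Entry (i, j) scores the edge between ROIs i and j, so a region's
# importance is the mean relevance of its incident edges: average the
# i-th row and i-th column contributions.
roi_importance = 0.5 * (heatmap.mean(axis=0) + heatmap.mean(axis=1))

# Min-max normalize to [0, 1] so regions can be ranked as candidate
# anatomical biomarkers.
roi_importance = (roi_importance - roi_importance.min()) / np.ptp(roi_importance)
top5 = np.argsort(roi_importance)[::-1][:5]
print(top5)  # indices of the five most implicated ROIs
```

Averaging rows and columns keeps the score well defined even if the upstream heatmap is not perfectly symmetric.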
Problem

Research questions and friction points this paper is trying to address.

ADHD
structural MRI
biomarkers
interpretability
neurodevelopmental disorder
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structural Covariance Network
Interpretable Deep Learning
ADHD Classification
Dual-Channel Fusion
Grad-CAM for sMRI