🤖 AI Summary
Traditional Transformers applied to resting-state fMRI face prohibitive O(N²) computational complexity, large parameter counts, and heavy sample requirements, limiting their utility in brain network modeling and biomarker discovery for neuropsychiatric disorders. To address these challenges, we propose the Quantum Time-series Transformer (QTS-Transformer), the first fMRI analysis framework integrating the Linear Combination of Unitaries (LCU) technique and Quantum Singular Value Transformation (QSVT). This design enables polylogarithmic-time computation, drastically reducing parameter count and alleviating dependence on large samples. Coupled with SHAP-based interpretability, QTS-Transformer matches or exceeds classical Transformers on the ABCD and UK Biobank datasets, proving especially robust under limited training samples. Moreover, it identifies ADHD-specific functional connectivity biomarkers, advancing both computational efficiency and clinical interpretability in neuroimaging analytics.
📝 Abstract
Resting-state functional magnetic resonance imaging (fMRI) has emerged as a pivotal tool for revealing intrinsic brain network connectivity and identifying neural biomarkers of neuropsychiatric conditions. However, classical self-attention transformer models, despite their formidable representational power, struggle with quadratic complexity, large parameter counts, and substantial data requirements. To address these barriers, we introduce the Quantum Time-series Transformer, a novel quantum-enhanced transformer architecture leveraging Linear Combination of Unitaries and Quantum Singular Value Transformation. Unlike classical transformers, the Quantum Time-series Transformer operates with polylogarithmic computational complexity, markedly reducing training overhead and enabling robust performance even with fewer parameters and limited sample sizes. Empirical evaluation on large-scale fMRI datasets from the Adolescent Brain Cognitive Development Study and the UK Biobank demonstrates that the Quantum Time-series Transformer achieves comparable or superior predictive performance relative to state-of-the-art classical transformer models, with especially pronounced gains in small-sample scenarios. Interpretability analyses using SHapley Additive exPlanations further reveal that the Quantum Time-series Transformer reliably identifies clinically meaningful neural biomarkers of attention-deficit/hyperactivity disorder (ADHD). These findings underscore the promise of quantum-enhanced transformers in advancing computational neuroscience by more efficiently modeling complex spatio-temporal dynamics and improving clinical interpretability.
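For readers unfamiliar with the LCU primitive the abstract names, the core idea can be illustrated classically. The paper's actual circuits are not shown here; the following is only a minimal numpy sketch of the standard PREPARE-SELECT construction, which block-encodes a weighted sum of unitaries A = (a₀U₀ + a₁U₁)/λ (itself generally non-unitary) inside a larger unitary acting on an ancilla qubit plus the system. The choice of U₀ = X, U₁ = Z and the weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Two example single-qubit unitaries (illustrative choice): Pauli-X and Pauli-Z.
U0 = np.array([[0, 1], [1, 0]], dtype=complex)   # X
U1 = np.array([[1, 0], [0, -1]], dtype=complex)  # Z
a0, a1 = 0.7, 0.3                                # nonnegative LCU weights
lam = a0 + a1                                    # normalization lambda

# PREPARE: rotate the ancilla |0> -> sqrt(a0/lam)|0> + sqrt(a1/lam)|1>.
c, s = np.sqrt(a0 / lam), np.sqrt(a1 / lam)
PREP = np.array([[c, -s], [s, c]], dtype=complex)

# SELECT: apply U_i controlled on the ancilla being in state |i>.
SELECT = np.block([[U0, np.zeros((2, 2))],
                   [np.zeros((2, 2)), U1]])

# Full LCU circuit: W = (PREP^dagger ⊗ I) · SELECT · (PREP ⊗ I).
I2 = np.eye(2)
W = np.kron(PREP.conj().T, I2) @ SELECT @ np.kron(PREP, I2)

# The top-left block (ancilla measured back in |0>) encodes A / lam.
A_encoded = W[:2, :2]
A_target = (a0 * U0 + a1 * U1) / lam
print(np.allclose(A_encoded, A_target))  # True
```

Postselecting the ancilla on |0> thus applies the non-unitary operator A up to the normalization λ; QSVT builds on block encodings of this kind to apply polynomial transformations of the encoded matrix's singular values.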