🤖 AI Summary
This study addresses the limitations of classical machine learning in predicting clinical outcomes from small, highly correlated biomarker datasets with nonlinear relationships. The authors implement quantum reservoir computing (QRC) on the neutral-atom Rydberg quantum processor *Aquila*, using SHAP-based feature selection to generate feature subsets and evaluating performance with six classical machine-learning models across multiple medical datasets and data splits. Experimental results show that hardware-executed QRC is more robust across data splits than its noiseless emulated counterpart and often achieves statistically significant gains in mean test accuracy. A statistical comparison of hardware and emulated feature distributions indicates that hardware execution applies a structured, time-dependent transformation, compressing features toward the mean and reducing mutual information, consistent with an implicit regularization effect that enhances generalization.
📝 Abstract
Biomarker-based prediction of clinical outcomes is challenging due to nonlinear relationships, correlated features, and the limited size of many medical datasets. Classical machine-learning methods can struggle under these conditions, motivating the search for alternatives. In this work, we investigate quantum reservoir computing (QRC), using both noiseless emulation and hardware execution on the neutral-atom Rydberg processor *Aquila*. We evaluate performance with six classical machine-learning models and use SHAP to generate feature subsets. We find that models trained on emulated quantum features achieve mean test accuracies comparable to those trained on classical features, but have higher training accuracies and greater variability over data splits, consistent with overfitting. When comparing hardware execution of QRC to noiseless emulation, the models are more robust over different data splits and often exhibit statistically significant improvements in mean test accuracy. This combination of improved accuracy and increased stability is suggestive of a regularising effect induced by hardware execution. To investigate the origin of this behaviour, we examine the statistical differences between hardware and emulated quantum feature distributions. We find that hardware execution applies a structured, time-dependent transformation characterised by compression toward the mean and a progressive reduction in mutual information relative to emulation.
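The abstract's final claim, that hardware execution compresses features toward the mean and lowers their mutual information, can be illustrated with a small numerical sketch. The snippet below is an assumption-laden toy model, not the paper's actual noise model: it treats the hardware effect as shrinkage toward the mean plus independent noise, and estimates mutual information with a simple histogram method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "emulated" QRC features: a correlated pair of readouts.
n = 5000
latent = rng.normal(0.0, 1.0, n)
emu_a = latent + 0.3 * rng.normal(size=n)
emu_b = latent + 0.3 * rng.normal(size=n)

def hardware_like(f, shrink=0.6, noise=0.3):
    """Toy model of the hardware transformation (an assumption):
    shrink values toward the mean and add independent noise."""
    return f.mean() + shrink * (f - f.mean()) + noise * rng.normal(size=f.size)

hw_a = hardware_like(emu_a)
hw_b = hardware_like(emu_b)

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information estimate, in nats."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)   # marginal of a
    pb = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pa @ pb)[nz])))

# Compression toward the mean: the spread of each feature shrinks.
print(emu_a.std(), hw_a.std())
# Reduced dependence between features under the transformation.
print(mutual_information(emu_a, emu_b), mutual_information(hw_a, hw_b))
```

Under this toy model both effects appear together: the per-feature standard deviation drops, and the estimated mutual information between feature pairs decreases, mirroring the regularising behaviour the abstract attributes to hardware execution.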