🤖 AI Summary
This work addresses the challenge of efficiently constructing high-dimensional memory representations for time series modeling by proposing a reservoir computing framework based on recurrent quantum feature maps. The approach introduces, for the first time, a recurrent architecture into quantum feature mapping: a fixed shallow quantum circuit is reused at each time step to jointly encode the current input and classical feedback signals derived from previous outputs. This design achieves strong predictive performance while requiring only a small number of qubits. Evaluated on the Mackey-Glass time series prediction task, the model attains lower mean squared error than classical echo state networks and multilayer perceptrons, demonstrating superior memory capacity and robustness to several noise channels, though it exhibits heightened sensitivity to two-qubit gate errors.
📝 Abstract
Reservoir computing promises a fast method for handling large amounts of temporal data. This hinges on constructing a good reservoir: a dynamical system capable of transforming inputs into a high-dimensional representation while remembering properties of earlier data. In this work, we introduce a reservoir based on recurrent quantum feature maps, in which a fixed quantum circuit is reused to encode both the current input and a classical feedback signal derived from previous outputs. We evaluate the model on the Mackey-Glass time series prediction task using our recently introduced CP feature map, and find that it achieves lower mean squared error than standard classical baselines, including echo state networks and multilayer perceptrons, while maintaining compact circuit depth and qubit requirements. We further analyze memory capacity and show that the model effectively retains temporal information, consistent with its forecasting accuracy. Finally, we study the impact of realistic noise and find that performance is robust to several noise channels but remains sensitive to two-qubit gate errors, identifying a key limitation for near-term implementations.
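To make the experimental setup concrete, the sketch below shows a minimal classical version of the two named ingredients: the Mackey-Glass benchmark series (Euler integration of the standard delay differential equation, dx/dt = β·x(t−τ)/(1+x(t−τ)ⁿ) − γ·x(t), with the usual parameters τ=17, β=0.2, γ=0.1, n=10) and an echo state network baseline of the kind the paper compares against. All hyperparameter values here (reservoir size, spectral radius, ridge coefficient, washout length) are illustrative assumptions, not the paper's settings, and the quantum feature-map reservoir itself is not reproduced.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, seed=0):
    """Generate a Mackey-Glass series by Euler-integrating
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    rng = np.random.default_rng(seed)
    history = int(tau / dt)
    x = np.zeros(n_steps + history)
    # Slightly perturbed constant initial history, a common convention.
    x[:history] = 1.2 + 0.01 * rng.standard_normal(history)
    for t in range(history, n_steps + history - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[t])
    return x[history:]

def esn_one_step(series, n_res=100, rho=0.9, washout=100, ridge=1e-6, seed=1):
    """Illustrative echo state network baseline: a fixed random recurrent
    reservoir with tanh nonlinearity; only the linear readout is trained
    (ridge regression) for one-step-ahead prediction."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, size=n_res)
    w = rng.standard_normal((n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # rescale spectral radius
    states = np.zeros((len(series) - 1, n_res))
    h = np.zeros(n_res)
    for t in range(len(series) - 1):
        h = np.tanh(w_in * series[t] + w @ h)  # reservoir state update
        states[t] = h
    X, y = states[washout:], series[washout + 1:]
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    pred = X @ w_out
    return pred, np.mean((pred - y) ** 2)
```

The quantum model described in the abstract replaces the random recurrent matrix with a fixed shallow quantum circuit, but keeps the same overall pattern: an untrained dynamical core driven by the input and a feedback signal, followed by a trained linear readout.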