AI Summary
To address the limited ability of conventional Reservoir Computing (RC) models to capture long-term dependencies, this paper proposes the Residual Reservoir Memory Network (ResRMN): a training-free recurrent architecture that couples a linear memory reservoir with a nonlinear reservoir and introduces residual orthogonal connections along the temporal dimension to explicitly enhance long-range propagation of the input. The orthogonality of the residual connections supports stable reservoir state dynamics and enables flexible residual configurations, properties that the paper characterizes through linear stability analysis. Experiments on multiple time-series forecasting and pixel-level 1-D classification benchmarks show that ResRMN consistently outperforms conventional RC variants, improving prediction accuracy, robustness to noise and hyperparameter variations, and the capture of long-range dependencies.
Abstract
We introduce a novel class of untrained Recurrent Neural Networks (RNNs) within the Reservoir Computing (RC) paradigm, called Residual Reservoir Memory Networks (ResRMNs). The ResRMN combines a linear memory reservoir with a non-linear reservoir, where the latter is based on residual orthogonal connections along the temporal dimension for enhanced long-term propagation of the input. The resulting reservoir state dynamics are studied through the lens of linear stability analysis, and we investigate diverse configurations for the temporal residual connections. The proposed approach is empirically assessed on time-series and pixel-level 1-D classification tasks. Our experimental results highlight the advantages of the proposed approach over other conventional RC models.
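To make the architecture concrete, below is a minimal NumPy sketch of one plausible ResRMN-style state update, assuming a specific coupling in which the linear memory reservoir's state and the raw input jointly drive the nonlinear reservoir, and a fixed alpha/beta mix for the orthogonal residual branch. All weight names (V_mem, U_mem, W_res, O, U_res), the mixing coefficients, and the exact update equations are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

# Illustrative dimensions (hypothetical, not from the paper)
d_in, d_mem, d_res = 1, 50, 100

# Linear memory reservoir: untrained, orthogonal recurrence preserves state norm
V_mem = random_orthogonal(d_mem, rng)            # memory recurrence (assumed orthogonal)
U_mem = rng.uniform(-1.0, 1.0, (d_mem, d_in))    # input-to-memory weights

# Nonlinear reservoir: untrained recurrent weights scaled to spectral radius 0.9,
# plus an orthogonal matrix O used for the residual connection along time
W = rng.standard_normal((d_res, d_res))
W_res = 0.9 * W / np.max(np.abs(np.linalg.eigvals(W)))
O = random_orthogonal(d_res, rng)
U_res = rng.uniform(-1.0, 1.0, (d_res, d_mem + d_in))
alpha, beta = 0.5, 0.5  # hypothetical residual/nonlinear mixing coefficients

def step(m, h, x):
    """One time step: linear memory update, then residual nonlinear update."""
    m_next = V_mem @ m + U_mem @ x                 # linear memory reservoir
    z = np.concatenate([m_next, x])                # memory state + raw input drive the nonlinear part
    # Orthogonal residual skip along the temporal dimension:
    h_next = alpha * (O @ h) + beta * np.tanh(W_res @ h + U_res @ z)
    return m_next, h_next

# Drive both reservoirs with a toy scalar time series and collect states
m, h = np.zeros(d_mem), np.zeros(d_res)
states = []
for t in range(200):
    x = np.array([np.sin(0.1 * t)])
    m, h = step(m, h, x)
    states.append(np.concatenate([m, h]))
states = np.asarray(states)  # shape (T, d_mem + d_res)
```

In a full RC pipeline, only a linear readout fitted on the collected states (e.g., by ridge regression) would be trained; the reservoir weights above stay fixed, which is what makes the model "untrained" in the RC sense.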