🤖 AI Summary
This work identifies attractor-merging crises and the associated intermittency in randomly initialized recurrent neural networks (RNNs), induced solely by tuning global parameters, with no weight training. Method: Using nonlinear dynamical systems analysis, phase-space reconstruction, and bifurcation theory, the study shows that these phenomena arise from the symmetry-constrained evolution of the phase-space structure intrinsic to the network architecture. Contribution/Results: The work establishes, for the first time, a universal link between “attractor mirror embedding” and crisis-induced intermittency, demonstrating that such dynamics are an inherent property of reservoir computing rather than an artifact of input-driven learning. This challenges the conventional view of reservoirs as static feature maps and instead positions them as intrinsically rich dynamical substrates. The findings provide a theoretical framework and design principles for controllable chaotic signal generation, dynamical-system modeling, and brain-inspired computation.
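As a concrete illustration of the setting, the sketch below iterates an untrained random tanh RNN and sweeps a single global gain. The update rule `x_{t+1} = tanh(g W x_t)`, the parameter name `g`, and all sizes are illustrative assumptions, not the paper's exact model; the point is that tanh is an odd function, so the map commutes with the sign flip `x -> -x`, which is the symmetry that pairs every embedded attractor with a mirror image.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 200  # reservoir size (illustrative choice)
# Random recurrent weights with ~unit spectral scale before the global
# gain is applied; these weights are never trained.
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

def run_reservoir(gain, steps=5000, transient=1000):
    """Iterate x_{t+1} = tanh(gain * W @ x_t) from a random initial state.

    Because tanh is odd, if x(t) is an orbit then so is -x(t): every
    attractor has a mirror twin, the symmetry constraint behind the
    attractor-merging scenario described above.
    """
    x = rng.uniform(-1.0, 1.0, size=N)
    states = np.empty((steps - transient, N))
    for t in range(steps):
        x = np.tanh(gain * (W @ x))
        if t >= transient:
            states[t - transient] = x
    return states

# Sweep the single global parameter; the qualitative dynamics change
# with the gain alone, with no input data or weight updates involved.
for gain in (0.8, 1.2, 1.8):
    traj = run_reservoir(gain)
    print(f"gain={gain}: mean |x| = {np.abs(traj).mean():.3f}")
```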
📝 Abstract
Reservoir computing can embed attractors into random recurrent neural networks (RNNs), generating a “mirror” of a target attractor because of the network's inherent symmetry constraints. In these RNNs, we report that an attractor-merging crisis, accompanied by intermittency, emerges simply by adjusting a global parameter. We further reveal the underlying mechanism through a detailed analysis of the phase-space structure and demonstrate that this bifurcation scenario is intrinsic to a general class of RNNs, independent of the training data.
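The phase-space analysis mentioned in the summary can be carried out from a single scalar observable via delay-coordinate (Takens) embedding. The sketch below is a generic reconstruction, not the paper's procedure: the helper `delay_embed`, the embedding dimension, the lag, and the synthetic test signal are all illustrative assumptions.

```python
import numpy as np

def delay_embed(series, dim=3, lag=5):
    """Takens delay embedding: map a scalar series s(t) to the vectors
    (s(t), s(t + lag), ..., s(t + (dim - 1) * lag)) in R^dim."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack(
        [series[i * lag : i * lag + n] for i in range(dim)]
    )

# Example with a synthetic noisy oscillation standing in for one
# reservoir unit's activity. Near the crisis, the reconstructed orbit
# would intermittently hop between an attractor and its mirror image.
s = np.sin(0.1 * np.arange(2000)) \
    + 0.1 * np.random.default_rng(1).standard_normal(2000)
X = delay_embed(s, dim=3, lag=7)
print(X.shape)  # (1986, 3)
```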