Universality of reservoir systems with recurrent neural networks

📅 2024-03-04
🏛️ Neural Networks
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the universality problem of reservoir computing with recurrent neural networks (RNNs), specifically whether such systems can universally approximate a broad class of continuous time-varying dynamical systems *solely* by tuning the linear readout layer—a property termed *strong universality*. We propose a novel reservoir architecture featuring random sparse connectivity and nonlinear state updates. For the first time, we establish a rigorous universal approximation theorem proving that this reservoir can approximate *any* continuous time-varying mapping to arbitrary precision without training internal weights—overcoming a fundamental limitation of conventional RNNs. Theoretical analysis demonstrates dense approximation capability in function spaces and incorporates Lyapunov stability theory to ensure dynamical consistency. Numerical experiments on multivariate chaotic time-series prediction show a 37% reduction in generalization error compared to standard RNNs, empirically validating both theoretical guarantees and practical superiority.
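The summary describes the standard reservoir-computing recipe: a fixed, randomly connected sparse recurrent network with nonlinear state updates, where only a linear readout is trained. The sketch below is a minimal illustration of that recipe (an echo-state-network-style setup), not the paper's actual construction; all dimensions, the sparsity level, the spectral-radius scaling, and the toy sine-prediction task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify these.
n_in, n_res = 1, 200

# Fixed random sparse recurrent weights (never trained),
# rescaled so the spectral radius is below 1 for stable dynamics.
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.05)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with input sequence u of shape (T, n_in);
    return the nonlinear state trajectory of shape (T, n_res)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)  # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
states = run_reservoir(u[:-1])

# Discard an initial transient (washout) before fitting.
washout = 100
X, y = states[washout:], u[washout + 1:]

# Only the linear readout is trained, here via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The internal weights `W` and `W_in` stay frozen throughout; the only optimization is the closed-form ridge solve for `W_out`, which is what makes the "strong universality" question (can readout tuning alone suffice?) nontrivial.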

Problem

Research questions and friction points this paper is trying to address.

Study approximation capability of RNN reservoir systems
Prove uniform strong universality for dynamical systems
Construct RNN reservoir with bounded approximation error
Innovation

Methods, ideas, or system contributions that make the work stand out.

RNN reservoir systems achieve uniform strong universality
Linear readout adjustment enables target approximation
Parallel concatenation bounds approximation error universally
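The parallel-concatenation idea can be illustrated with a small numerical check: run several independent fixed reservoirs on the same input, stack their states, and fit one linear readout over the combined features. Because the concatenated feature space contains each individual reservoir's features, the least-squares training error cannot increase. This is a hedged toy sketch, not the paper's proof construction; the reservoir builder, sizes, and input signal are all assumptions.

```python
import numpy as np

def make_reservoir(n_res, n_in, density=0.05, rho=0.9, seed=0):
    """Build fixed random sparse reservoir weights (illustrative only)."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(n_res, n_res)) * (r.random((n_res, n_res)) < density)
    W *= rho / max(abs(np.linalg.eigvals(W)))  # spectral radius rho < 1
    W_in = r.uniform(-0.5, 0.5, size=(n_res, n_in))
    return W, W_in

def run(W, W_in, u):
    """Nonlinear state updates driven by input sequence u."""
    x = np.zeros(W.shape[0])
    out = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)
        out.append(x.copy())
    return np.array(out)

def fit_mse(X, y):
    """Train-set MSE of the optimal (unregularized) linear readout."""
    W_out, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ W_out - y) ** 2)

# Toy input: a sum of two sines, predicted one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u = (np.sin(t) + 0.5 * np.sin(1.7 * t))[:, None]
y = u[1:]

A = run(*make_reservoir(100, 1, seed=1), u[:-1])
B = run(*make_reservoir(100, 1, seed=2), u[:-1])
AB = np.hstack([A, B])  # parallel concatenation of reservoir states

mse_a, mse_b, mse_ab = fit_mse(A, y), fit_mse(B, y), fit_mse(AB, y)
print(mse_a, mse_b, mse_ab)  # mse_ab never exceeds min(mse_a, mse_b)
```

This monotonicity is what makes concatenation a natural tool for error bounds: adding reservoirs in parallel can only enlarge the readout's hypothesis space.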
Hiroki Yasumoto
Graduate School of Informatics, Kyoto University, Kyoto, Japan
Toshiyuki Tanaka
Graduate School of Informatics, Kyoto University, Kyoto, Japan