Thermodynamic limit in learning period three

📅 2024-05-12
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether a neural network can acquire periodic orbits of every period by learning only three data points. Working in the thermodynamic limit of a random neural network, the authors model learning as interpolation and apply stability analysis, bifurcation theory, and topological conjugacy arguments. They show that three-point training can induce attractors of all periods; under quadratic interpolation, a universal post-learning bifurcation scenario arises, topologically conjugate to the classical logistic map. The infinite weight-scale limit exhibits singular behavior, and learning period three reveals a characteristic symmetry. Crucially, almost all learned periodic orbits are unstable, yet each network spontaneously settles on characteristic attractors, including untrained ones, revealing an implicit mechanism for generating rich, even chaotic, dynamics under minimal supervision.

๐Ÿ“ Abstract
A continuous one-dimensional map with period three includes all periods. This raises the following question: Can we obtain any type of periodic orbit solely by learning three data points? In this paper, we report the answer to be yes. Considering a random neural network in its thermodynamic limit, we first show that almost all learned periods are unstable and each network has its characteristic attractors (which can even be untrained ones). The latently acquired dynamics, which are unstable within the trained network, serve as a foundation for the diversity of characteristic attractors and may even lead to the emergence of attractors of all periods after learning. When the neural network interpolation is quadratic, a universal post-learning bifurcation scenario appears, which is consistent with a topological conjugacy between the trained network and the classical logistic map. In addition to universality, we explore specific properties of certain networks, including the singular behavior in the infinite weight-scale limit and the symmetry in learning period three.
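The core idea can be illustrated with a minimal sketch (not the paper's actual network): fit the unique quadratic through three data points that form a period-3 cycle, so the fitted map sends each training point to the next, then check the stability of the learned orbit via the product of the map's derivatives along the cycle. The orbit values (0.2 → 0.6 → 0.9 → 0.2) are an arbitrary choice for illustration, not taken from the paper.

```python
import numpy as np

# Three training pairs encoding a period-3 orbit: f(0.2)=0.6, f(0.6)=0.9, f(0.9)=0.2
xs = np.array([0.2, 0.6, 0.9])   # inputs
ys = np.array([0.6, 0.9, 0.2])   # targets: each point maps to the next in the cycle

# Degree-2 fit through exactly three points is exact interpolation (unique quadratic).
coeffs = np.polyfit(xs, ys, 2)
f = lambda x: np.polyval(coeffs, x)

# The trained map reproduces the period-3 orbit exactly at the training points...
assert np.allclose(f(xs), ys)

# ...but the orbit is unstable: the multiplier |f'(x0) f'(x1) f'(x2)| exceeds 1,
# consistent with the claim that almost all learned periods are unstable.
dfdx = np.polyval(np.polyder(coeffs), xs)
multiplier = np.prod(np.abs(dfdx))
print(f"orbit multiplier: {multiplier:.2f}")  # > 1 for this choice of points
```

By Sharkovskii's theorem, a continuous interval map with a period-3 orbit has orbits of all periods, so even this toy quadratic latently contains every period despite being trained on only three points.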
Problem

Research questions and friction points this paper is trying to address.

Neural Networks
Pattern Learning
Stability and Diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Network
Pattern Recognition
Complex Dynamics
Yuichiro Terasaki
Department of Mechano-Informatics, The University of Tokyo, Tokyo 113-8656, Japan
Kohei Nakajima
University of Tokyo
nonlinear dynamics · information theory · reservoir computing · soft robotics · embodiment