🤖 AI Summary
This work investigates the dynamical evolution of weight matrices under stochastic gradient descent (SGD), aiming to clarify the scaling relationship between learning rate and batch size and the extent to which it is universal. We propose a modeling framework grounded in random matrix theory, describing stochastic weight-matrix dynamics as a Dyson Brownian motion, and derive the linear scaling rule between learning rate and batch size. The analysis separates weight evolution into two parts: a universal, algorithm-determined stochastic component and a non-universal, architecture-dependent contribution. Experiments on the Gaussian restricted Boltzmann machine and a linear one-hidden-layer neural network quantitatively validate the scaling rule and disentangle the universal from the non-universal contributions. This work establishes a unified stochastic-process perspective for understanding optimization dynamics in deep learning.
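To make the origin of the linear scaling rule explicit, the following is a schematic derivation in our own notation (the paper's precise definitions and conventions may differ): mini-batch gradient noise enters each SGD step with covariance proportional to η²/B, so only the ratio η/B survives in the continuum limit.

```latex
% Schematic derivation of the linear scaling rule (our notation, not the paper's).
% The mini-batch gradient over a batch of size B is the full gradient plus
% zero-mean noise whose covariance shrinks as 1/B:
\[
  \nabla L_B(W) \;=\; \nabla L(W) + \tfrac{1}{\sqrt{B}}\,\xi,
  \qquad \mathbb{E}[\xi]=0, \quad \mathrm{Cov}[\xi]=\Sigma .
\]
% A single SGD step with learning rate \eta therefore has
\[
  \mathbb{E}[\delta W] = -\eta\,\nabla L(W),
  \qquad \mathrm{Cov}[\delta W] = \tfrac{\eta^{2}}{B}\,\Sigma .
\]
% Identifying continuous time t = k\eta after k steps, the drift is O(1) while
% the noise variance accumulated per unit time is (\eta^{2}/B)/\eta = \eta/B:
\[
  \mathrm{d}W = -\nabla L(W)\,\mathrm{d}t
  + \sqrt{\tfrac{\eta}{B}}\;\Sigma^{1/2}\,\mathrm{d}B_t ,
\]
% which is invariant under \eta \to c\eta, B \to cB: the linear scaling rule.
% For the eigenvalues \lambda_i of the weight matrix this induces dynamics of
% Dyson Brownian motion type, schematically
\[
  \mathrm{d}\lambda_i =
  \Big( K_i(\lambda) + \sum_{j\neq i}\frac{c}{\lambda_i-\lambda_j} \Big)\mathrm{d}t
  + \sqrt{\tfrac{\eta}{B}}\,\sigma\,\mathrm{d}B_i ,
\]
% where the eigenvalue-repulsion sum drives the universal statistics and the
% drift K_i carries the non-universal, architecture-dependent information.
```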
📝 Abstract
Investigating the dynamics of learning in machine learning algorithms is of paramount importance for understanding how and why an approach may be successful. The tools of physics and statistics provide a robust setting for such investigations. Here we apply concepts from random matrix theory to describe stochastic weight matrix dynamics, using the framework of Dyson Brownian motion. We derive the linear scaling rule between the learning rate (step size) and the batch size, and identify universal and non-universal aspects of weight matrix dynamics. We test our findings in the (near-)solvable case of the Gaussian Restricted Boltzmann Machine and in a linear one-hidden-layer neural network.
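As a concrete illustration of the scaling rule (a minimal toy sketch, not the paper's RBM or network experiments; the quadratic loss and the noise model here are our own assumptions), SGD runs that share the ratio η/B should exhibit statistically indistinguishable stationary weight fluctuations:

```python
# Minimal sketch of the linear scaling rule, assuming a toy quadratic loss
# L(w) = |w|^2 / 2 and synthetic mini-batch noise with unit per-sample
# covariance; an illustration only, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def stationary_fluctuations(eta, batch_size, n_steps=50_000, dim=10):
    """Run SGD with mini-batch gradient noise of covariance 1/B and return
    the standard deviation of the weights in the stationary regime."""
    w = np.zeros(dim)
    samples = []
    for step in range(n_steps):
        # mini-batch gradient = true gradient + noise of std 1/sqrt(B)
        grad = w + rng.standard_normal(dim) / np.sqrt(batch_size)
        w -= eta * grad
        if step > n_steps // 2:  # discard the transient
            samples.append(w.copy())
    return np.array(samples).std()

# Same ratio eta/B => matching fluctuations, approx. sqrt(eta / (2 B))
for eta, B in [(0.01, 8), (0.04, 32)]:
    print(f"eta={eta:.3f}  B={B:3d}  eta/B={eta/B:.5f}  "
          f"weight std={stationary_fluctuations(eta, B):.4f}")
```

Doubling both η and B leaves η/B, and hence the diffusion, unchanged; doubling η alone would inflate the fluctuations by roughly a factor of √2.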