An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models

📅 2025-05-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the geometric origin of the low-dimensional "hyper-ribbon" manifolds that spontaneously emerge along deep neural network training trajectories in the space of probability distributions. Methodologically, it leverages linear models and dynamical systems theory to analytically characterize the phase-transition mechanism underlying hyper-ribbon formation. Specifically, it establishes the joint dependence of manifold emergence on three control parameters: the decay rate of the eigenvalues of the input correlation matrix of the training data, the relative scale of the ground-truth outputs to the weights at initialization, and the number of gradient-descent steps. The authors derive an analytic existence criterion for hyper-ribbon manifolds, along with tight theoretical bounds, and extend these results beyond the idealized setting to kernel machines and linear models trained with stochastic gradient descent. The contributions provide a rigorous geometric explanation for the dimensional reduction observed in high-dimensional optimization trajectories, while offering theoretical insight into generalization behavior and training dynamics.

📝 Abstract
Recent experiments have shown that training trajectories of multiple deep neural networks with different architectures, optimization algorithms, hyper-parameter settings, and regularization methods evolve on a remarkably low-dimensional "hyper-ribbon-like" manifold in the space of probability distributions. Inspired by the similarities in the training trajectories of deep networks and linear networks, we analytically characterize this phenomenon for the latter. We show, using tools in dynamical systems theory, that the geometry of this low-dimensional manifold is controlled by (i) the decay rate of the eigenvalues of the input correlation matrix of the training data, (ii) the relative scale of the ground-truth output to the weights at the beginning of training, and (iii) the number of steps of gradient descent. By analytically computing and bounding the contributions of these quantities, we characterize phase boundaries of the region where hyper-ribbons are to be expected. We also extend our analysis to kernel machines and linear models that are trained with stochastic gradient descent.
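The role of the three control parameters can be illustrated with a small linear-regression experiment. This is an illustrative sketch, not the paper's construction: it measures trajectory dimensionality in prediction space via PCA rather than in the space of probability distributions, and all parameter values below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, steps, lr = 200, 20, 100, 0.1

# (i) Inputs whose correlation matrix has geometrically decaying eigenvalues.
decay = 0.5
eigvals = decay ** np.arange(d)
X = rng.standard_normal((n, d)) * np.sqrt(eigvals)

# (ii) Ground-truth outputs that are large relative to the initial weights.
w_true = rng.standard_normal(d)
y = X @ w_true
w = 0.01 * rng.standard_normal(d)

# (iii) A fixed number of full-batch gradient-descent steps, recording the
# model's training-set predictions at every step.
trajectory = []
for _ in range(steps):
    w -= lr * X.T @ (X @ w - y) / n
    trajectory.append(X @ w)

# PCA of the trajectory: count directions needed for 99% of its variance.
T = np.asarray(trajectory)
Tc = T - T.mean(axis=0)
s = np.linalg.svd(Tc, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.99)) + 1
print(f"trajectory effectively spans {k} of {min(T.shape)} possible directions")
```

Faster eigenvalue decay, a smaller initialization scale relative to the targets, or fewer gradient steps all shrink `k`, loosely mirroring the dependence on the three quantities that the paper analyzes for linear networks.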
Problem

Research questions and friction points this paper is trying to address.

Characterizing low-dimensional training trajectories in neural networks
Analyzing hyper-ribbon geometry via input correlation eigenvalues
Extending analysis to kernel machines and SGD-trained linear models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing hyper-ribbon manifolds in linear networks
Using dynamical systems theory for geometry control
Extending analysis to kernel machines and SGD
Jialin Mao
University of Pennsylvania
Itay Griniasty
School of Mechanical Engineering, Tel Aviv University
Soft matter physics, Information geometry
Yan Sun
University of Pennsylvania
Mark K. Transtrum
Brigham Young University
James P. Sethna
Cornell University