🤖 AI Summary
This work investigates the geometric origin of the low-dimensional "hyper-ribbon" manifolds that spontaneously emerge when deep neural network training trajectories are viewed in the space of probability distributions. Methodologically, it uses linear models and tools from dynamical systems theory to analytically characterize the mechanism behind hyper-ribbon formation. Specifically, it establishes that the emergence of the manifold is jointly controlled by three quantities: the decay rate of the eigenvalues of the input correlation matrix of the training data, the scale of the ground-truth output relative to the weights at initialization, and the number of gradient descent steps. The authors derive an analytic criterion for when hyper-ribbons arise, together with bounds on the contributions of each control parameter, and extend these results beyond the idealized setting to kernel machines and linear models trained with stochastic gradient descent. The contributions thus provide a rigorous geometric explanation for the dimensional reduction observed in high-dimensional optimization trajectories, while offering theoretical insight into generalization behavior and training dynamics.
📝 Abstract
Recent experiments have shown that training trajectories of multiple deep neural networks with different architectures, optimization algorithms, hyper-parameter settings, and regularization methods evolve on a remarkably low-dimensional "hyper-ribbon-like" manifold in the space of probability distributions. Inspired by the similarities in the training trajectories of deep networks and linear networks, we analytically characterize this phenomenon for the latter. We show, using tools in dynamical systems theory, that the geometry of this low-dimensional manifold is controlled by (i) the decay rate of the eigenvalues of the input correlation matrix of the training data, (ii) the relative scale of the ground-truth output to the weights at the beginning of training, and (iii) the number of steps of gradient descent. By analytically computing and bounding the contributions of these quantities, we characterize phase boundaries of the region where hyper-ribbons are to be expected. We also extend our analysis to kernel machines and linear models that are trained with stochastic gradient descent.
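The mechanism described in the abstract lends itself to a quick numerical check. The sketch below is a hypothetical illustration (not the authors' code): it trains several linear models with full-batch gradient descent on Gaussian data whose input correlation matrix has a power-law eigenvalue spectrum, records the models' outputs on a fixed set of probe inputs as a proxy for the trajectory's position in prediction space, and inspects the PCA spectrum of the stacked trajectories. All names and parameter choices (`alpha`, `sigma0`, `n_steps`, `lr`, the probe set) are assumptions made for illustration.

```python
# Hypothetical illustration (not the authors' code): probe how eigenvalue
# decay, initial weight scale, and the number of gradient steps shape the
# effective dimensionality of linear-model training trajectories.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_runs = 50, 500, 10      # input dim, training samples, random inits
n_steps, lr = 200, 0.05         # gradient steps and learning rate (assumed)
alpha = 2.0                     # eigenvalue decay rate: lambda_k ~ k^(-alpha)
sigma0 = 0.1                    # initial weight scale relative to the teacher

# Training data with a power-law input correlation spectrum.
lam = np.arange(1, d + 1, dtype=float) ** (-alpha)
X = rng.standard_normal((n, d)) * np.sqrt(lam)   # rows ~ N(0, diag(lam))
w_star = rng.standard_normal(d)                  # ground-truth weights
y = X @ w_star

# Fixed probe inputs; the model's outputs here stand in for its position
# along the trajectory in prediction space.
X_probe = rng.standard_normal((20, d)) * np.sqrt(lam)

trajectories = []
for _ in range(n_runs):
    w = sigma0 * rng.standard_normal(d)
    traj = []
    for _ in range(n_steps):
        w -= lr * (X.T @ (X @ w - y)) / n        # full-batch MSE gradient step
        traj.append(X_probe @ w)
    trajectories.append(np.array(traj))

# PCA over all (run, step) points: a hyper-ribbon shows up as a spectrum
# dominated by the first few principal components.
P = np.vstack(trajectories)
P -= P.mean(axis=0)
s = np.linalg.svd(P, compute_uv=False)
explained = s**2 / np.sum(s**2)
print("top-5 explained variance:", np.round(explained[:5], 4))
```

Under these assumptions, increasing `alpha` (faster eigenvalue decay), shrinking `sigma0` (small initial weights relative to the ground truth), or reducing `n_steps` concentrates the explained variance in the leading components, which is the qualitative signature of the hyper-ribbon regime that the paper characterizes analytically.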