🤖 AI Summary
This work investigates the energy landscape induced over input space by randomly initialized multilayer perceptrons (MLPs), focusing on the statistical-physics structure of near-global minima in the infinite-width limit. Methodologically, it treats a deep network as a random Hamiltonian over its inputs and employs the replica trick, saddle-point analysis, and asymptotic techniques to analytically compute the entropy at a given energy and to solve the saddle-point equations governing the overlap structure of inputs drawn from the induced Gibbs distribution. The key contribution is showing that the activation function and depth control the complexity of the landscape: some non-linearities, such as sin, produce full replica symmetry breaking, whereas shallow tanh and ReLU networks and deep shaped MLPs remain replica symmetric. The study thus provides a statistical-mechanical framework for analyzing randomly initialized deep networks and the structure of the landscapes they define over their inputs.
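For orientation, here is a minimal sketch of the setup the summary refers to (the notation is ours, not taken from the paper): the MLP output is treated as an energy over inputs, and the replica trick is the standard identity for averaging $\log Z$ over the quenched random weights,

$$
Z_\theta(\beta) \;=\; \int \mathrm{d}x \; e^{-\beta H_\theta(x)},
\qquad
\mathbb{E}_\theta\!\left[\log Z_\theta\right] \;=\; \lim_{n \to 0} \frac{\mathbb{E}_\theta\!\left[Z_\theta^{\,n}\right] - 1}{n},
$$

where $H_\theta(x)$ is the scalar output of the MLP with fixed (quenched) random weights $\theta$. The entropy at energy $E$, i.e. the log volume of inputs with $H_\theta(x) \approx E$, is then obtained from the resulting free energy by a Legendre transform in $\beta$.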
📝 Abstract
Neural networks are complex functions of both their inputs and parameters. Much prior work in deep learning theory analyzes the distribution of network outputs at a fixed set of inputs (e.g. a training dataset) over random initializations of the network parameters. The purpose of this article is to consider the opposite situation: we view a randomly initialized Multi-Layer Perceptron (MLP) as a Hamiltonian over its inputs. For typical realizations of the network parameters, we study the properties of the energy landscape induced by this Hamiltonian, focusing on the structure of near-global minima in the limit of infinite width. Specifically, we use the replica trick to perform an exact analytic calculation giving the entropy (log volume of input space) at a given energy. We further derive saddle point equations that describe the overlaps between inputs sampled i.i.d. from the Gibbs distribution induced by the random MLP. For linear activations we solve these saddle point equations exactly, and we solve them numerically for a variety of depths and activation functions, including $\tanh$, $\sin$, $\mathrm{ReLU}$, and shaped non-linearities. Even at infinite width, we find a rich range of behaviors. For some non-linearities, such as $\sin$, the landscapes of random MLPs exhibit full replica symmetry breaking, while shallow $\tanh$ and $\mathrm{ReLU}$ networks and deep shaped MLPs are instead replica symmetric.
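As a concrete, purely illustrative sketch of the objects the abstract describes (not the authors' code), the JAX snippet below treats a randomly initialized $\tanh$ MLP as a Hamiltonian over its inputs, draws two approximate samples from the induced Gibbs distribution via Langevin dynamics, and reports their overlap $q = x_1 \cdot x_2 / d$, the order parameter that the saddle point equations characterize. The dimensions, step sizes, and spherical input constraint are assumptions made for the example.

```python
# Hypothetical sketch (not the paper's code): treat a randomly initialized MLP
# as a Hamiltonian H(x) over its inputs, draw two approximate samples from the
# induced Gibbs distribution p(x) ~ exp(-beta * H(x)) with Langevin dynamics,
# and report their overlap q = x1 . x2 / d. Sizes and constants are illustrative.
import jax
import jax.numpy as jnp

d, width, depth = 64, 128, 3               # input dim, hidden width, depth (assumed)
beta, step, n_steps = 4.0, 1e-3, 5_000

dims = [d] + [width] * depth + [1]
keys = jax.random.split(jax.random.PRNGKey(0), len(dims) - 1)
# Quenched random weights at standard 1/sqrt(fan_in) initialization.
Ws = [jax.random.normal(k, (m, n)) / jnp.sqrt(n)
      for k, (n, m) in zip(keys, zip(dims[:-1], dims[1:]))]

def hamiltonian(x):
    """Scalar network output H(x); the activation choice shapes the landscape."""
    h = x
    for W in Ws[:-1]:
        h = jnp.tanh(W @ h)
    return (Ws[-1] @ h)[0]

grad_H = jax.jit(jax.grad(hamiltonian))

def langevin_sample(seed):
    """Approximate Gibbs sample, with inputs kept on the sphere of radius sqrt(d)."""
    key = jax.random.PRNGKey(seed)
    x = jax.random.normal(key, (d,))
    for _ in range(n_steps):
        key, sub = jax.random.split(key)
        x = x - step * beta * grad_H(x) + jnp.sqrt(2 * step) * jax.random.normal(sub, (d,))
        x = x * jnp.sqrt(d) / jnp.linalg.norm(x)
    return x

x1, x2 = langevin_sample(1), langevin_sample(2)
print(f"energies: {float(hamiltonian(x1)):.3f}, {float(hamiltonian(x2)):.3f}; "
      f"overlap q = {float(x1 @ x2 / d):.3f}")
```

Loosely speaking, in a replica-symmetric phase repeated runs of such an experiment concentrate on a single typical overlap value, whereas replica symmetry breaking shows up as a nontrivial distribution of overlaps between independent samples.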