🤖 AI Summary
This work investigates the emergence mechanism of Neural Collapse Stage I (NC1)—intra-class feature collapse—in three-layer neural networks. For linearly separable data, we establish, under a data-dependent mean-field regime, a quantitative connection between NC1 and the loss landscape: points with small empirical loss and small gradient norm approximately satisfy NC1, with the deviation from NC1 controlled by the residual loss and gradient norm; thus, NC1 arises naturally from gradient optimization, not from architectural priors. Moreover, for well-separated data distributions, gradient flow attains NC1 and vanishing test loss simultaneously—explaining their empirical co-occurrence. Our analysis integrates mean-field theory, non-convex gradient flow dynamics, and non-convex optimization principles, yielding new insights into implicit regularization and generalization in neural networks.
📝 Abstract
Neural Collapse is a phenomenon where the last-layer representations of a well-trained neural network converge to a highly structured geometry. In this paper, we focus on its first (and most basic) property, known as NC1: the within-class variability vanishes. While prior theoretical studies establish the occurrence of NC1 via the data-agnostic unconstrained features model, our work adopts a data-specific perspective, analyzing NC1 in a three-layer neural network, with the first two layers operating in the mean-field regime and followed by a linear layer. In particular, we establish a fundamental connection between NC1 and the loss landscape: we prove that points with small empirical loss and gradient norm (thus, close to being stationary) approximately satisfy NC1, and the closeness to NC1 is controlled by the residual loss and gradient norm. We then show that (i) gradient flow on the mean squared error converges to NC1 solutions with small empirical loss, and (ii) for well-separated data distributions, both NC1 and vanishing test loss are achieved simultaneously. This aligns with the empirical observation that NC1 emerges during training while models attain near-zero test error. Overall, our results demonstrate that NC1 arises from gradient training due to the properties of the loss landscape, and they show the co-occurrence of NC1 and small test error for certain data distributions.
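To make the NC1 property concrete, below is a minimal sketch of the standard within-class variability metric from the Neural Collapse literature (the paper's exact quantitative formulation may differ): the trace of the within-class covariance times the pseudo-inverse of the between-class covariance, normalized by the number of classes. Values near zero indicate that within-class variability has collapsed. The function name `nc1_metric` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def nc1_metric(features, labels):
    """Common NC1 proxy: Tr(Sigma_W @ pinv(Sigma_B)) / K.

    Sigma_W is the within-class covariance of last-layer features,
    Sigma_B the between-class covariance of class means; a value
    near 0 means within-class variability has (nearly) vanished.
    """
    classes = np.unique(labels)
    K = len(classes)
    n, d = features.shape
    global_mean = features.mean(axis=0)
    Sigma_W = np.zeros((d, d))
    Sigma_B = np.zeros((d, d))
    for c in classes:
        X_c = features[labels == c]
        mu_c = X_c.mean(axis=0)
        diff = X_c - mu_c                      # deviations from class mean
        Sigma_W += diff.T @ diff / n
        b = (mu_c - global_mean)[:, None]      # class-mean deviation
        Sigma_B += (len(X_c) / n) * (b @ b.T)
    return np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / K

# Fully collapsed features (each sample equals its class mean) give 0.
feats = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 5)
labs = np.array([0] * 5 + [1] * 5)
print(nc1_metric(feats, labs))  # -> 0.0
```

In this toy example every feature vector already sits exactly at its class mean, so `Sigma_W` is zero and the metric is exactly 0; adding noise to the features drives it above 0.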