Early alignment in two-layer networks training is a two-edged sword

📅 2024-01-19
🏛️ arXiv.org
📈 Citations: 14
Influential: 3
🤖 AI Summary
This work investigates the dual role of neuron alignment in the early training of two-layer ReLU networks. Under small initialization, hidden neurons rapidly align toward a few critical data-dependent directions, inducing implicitly sparse representations while simultaneously impeding global optimization: overparameterized networks can become trapped at spurious stationary points with nonzero loss and never reach the global minimum. Through gradient-flow dynamics modeling, initialization-sensitivity analysis, and an explicit counterexample, the authors establish a quantitative link between alignment and convergence failure, proving that such alignment is a sufficient condition for failure of global convergence under standard data distributions. The results reveal that early alignment is not merely a source of inductive bias but also a fundamental optimization obstacle, offering a new perspective on deep-learning training dynamics.

📝 Abstract
Training neural networks with first-order optimisation methods is at the core of the empirical success of deep learning. The scale of initialisation is a crucial factor, as small initialisations are generally associated with a feature learning regime, for which gradient descent is implicitly biased towards simple solutions. This work provides a general and quantitative description of the early alignment phase, originally introduced by Maennel et al. (2018). For small initialisation and one-hidden-layer ReLU networks, the early stage of the training dynamics leads to an alignment of the neurons towards key directions. This alignment induces a sparse representation of the network, which is directly related to the implicit bias of gradient flow at convergence. This sparsity-inducing alignment, however, comes at the expense of difficulties in minimising the training objective: we also provide a simple data example for which overparameterised networks fail to converge towards global minima, converging instead to a spurious stationary point.
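The early alignment phase described in the abstract can be sketched as follows. This is a schematic derivation under the small-initialisation assumption, not the paper's exact statement; the notation $G(\theta)$ is introduced here for illustration. Write the network and the squared loss as

```latex
f_\theta(x) = \sum_{j=1}^{m} a_j\,\sigma(w_j^\top x),
\qquad
L(\theta) = \frac{1}{2n}\sum_{i=1}^{n}\big(f_\theta(x_i) - y_i\big)^2,
```

with $\sigma$ the ReLU. For small initialisation the network output is negligible early on, so the residual is approximately $-y_i$ and gradient flow on each hidden neuron reduces to

```latex
\dot w_j \;=\; -\nabla_{w_j} L
\;\approx\; a_j\, G\!\left(\tfrac{w_j}{\|w_j\|}\right),
\qquad
G(\theta) \;:=\; \frac{1}{n}\sum_{i=1}^{n} y_i\,\mathbf{1}\{\theta^\top x_i > 0\}\, x_i .
```

Decomposing into radial and tangential parts with $\theta_j = w_j/\|w_j\|$,

```latex
\frac{d}{dt}\|w_j\| \;\approx\; a_j\,\theta_j^\top G(\theta_j),
\qquad
\dot\theta_j \;\approx\; \frac{a_j}{\|w_j\|}\,\big(I - \theta_j\theta_j^\top\big)\, G(\theta_j).
```

Because $\|w_j\|$ is small, the tangential term dominates: neuron directions converge toward extremal directions of $\theta \mapsto \theta^\top G(\theta)$ before the norms grow, which is the sparsity-inducing alignment.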
Problem

Research questions and friction points this paper is trying to address.

Analyzing early alignment effects in two-layer ReLU networks
Studying implicit bias of gradient flow towards sparse representations
Identifying training convergence failures in overparameterized networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small initialization induces feature learning
Neuron alignment towards key sparse directions
Implicit bias of gradient flow yields sparse representations
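The alignment phenomenon listed above can be illustrated with a toy simulation. This is a minimal sketch with hypothetical data and hyperparameters, not the paper's experiment: a two-layer ReLU network with small initialisation is trained by gradient descent, and we compare the mean pairwise cosine similarity of the hidden-neuron directions at initialisation and after the early phase (stopped while the weight norms are still tiny).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset in R^2 with +/-1 labels (hypothetical, for illustration only).
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

m, scale, lr = 50, 1e-4, 0.1          # overparameterised width, small init
W = scale * rng.normal(size=(m, 2))   # hidden-layer weights
a = scale * rng.normal(size=m)        # output-layer weights

def mean_abs_cosine(W):
    """Mean |cos| over all pairs of hidden-neuron directions."""
    U = W / np.linalg.norm(W, axis=1, keepdims=True)
    C = np.abs(U @ U.T)
    iu = np.triu_indices(len(W), k=1)
    return C[iu].mean()

cos_init = mean_abs_cosine(W)

# Early phase: iterate until the total weight norm has grown 50x (still tiny),
# so we observe only the alignment dynamics, not the later fitting phase.
steps = 0
while np.linalg.norm(W) < 50 * scale and steps < 5000:
    H = np.maximum(X @ W.T, 0.0)            # hidden activations, shape (n, m)
    r = H @ a - y                           # residual of the squared loss
    mask = (X @ W.T > 0).astype(float)      # ReLU derivative
    grad_a = (H.T @ r) / len(X)
    grad_W = ((mask * np.outer(r, a)).T @ X) / len(X)
    a -= lr * grad_a
    W -= lr * grad_W
    steps += 1

cos_early = mean_abs_cosine(W)
print(f"mean |cos| at init:         {cos_init:.3f}")
print(f"mean |cos| after alignment: {cos_early:.3f}")
```

Under this setup the directions cluster toward a few extremal directions while the norms remain small, so the mean absolute cosine similarity rises well above its value under random initialisation (about 2/π in 2D).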