How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

📅 2023-05-29
📈 Citations: 27
✨ Influential: 4
🤖 AI Summary
This work investigates how two-layer neural networks adaptively learn the structure of target functions under high-dimensional Gaussian data via a small number of large-batch gradient descent steps, focusing on the interplay between batch size and iteration count, directional heterogeneity in learning difficulty, and required sample complexity. Method: Leveraging concentration inequalities, projection-based conditional analysis, and the Gaussian equivalence principle, we develop a neuron-direction-specific theoretical framework; we introduce the “staircase property” and “transition exponent” to rigorously quantify directional learning difficulty and sharply separate the performance regimes of feature learning and lazy training. Results: We prove that only $O(d)$ samples suffice to efficiently learn multiple target directions across iterations, achieving approximation and generalization errors substantially better than initialization. The learning process exhibits a distinctive staircase-like accuracy improvement—characterized by abrupt performance gains at critical iteration thresholds—demonstrating nontrivial adaptation to underlying function structure.
📝 Abstract
We investigate theoretically how the features of a two-layer neural network adapt to the structure of the target function through a few large batch gradient descent steps, leading to improvement in the approximation capacity with respect to the initialization. We compare the influence of batch size and that of multiple (but finitely many) steps. For a single gradient step, a batch of size $n = \mathcal{O}(d)$ is both necessary and sufficient to align with the target function, although only a single direction can be learned. In contrast, $n = \mathcal{O}(d^2)$ is essential for neurons to specialize to multiple relevant directions of the target with a single gradient step. Even in this case, we show there might exist "hard" directions requiring $n = \mathcal{O}(d^\ell)$ samples to be learned, where $\ell$ is known as the leap index of the target. The picture drastically improves over multiple gradient steps: we show that a batch size of $n = \mathcal{O}(d)$ is indeed enough to learn multiple target directions satisfying a staircase property, where more and more directions can be learned over time. Finally, we discuss how these directions allow us to drastically improve the approximation capacity and generalization error over the initialization, illustrating a separation of scale between the random features/lazy regime and the feature learning regime. Our technical analysis leverages a combination of techniques related to concentration, projection-based conditioning, and Gaussian equivalence, which we believe are of independent interest. By pinning down the conditions necessary for specialization and learning, our results highlight the interaction between batch size and number of iterations, and lead to a hierarchical depiction where learning performance exhibits a stairway to accuracy over time and batch size, shedding new light on how neural networks adapt to features of the data.
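The single-step phenomenon in the abstract can be reproduced in a toy simulation. The sketch below is ours, not the paper's exact setting: the relu single-index target, the constants in $n$ and $\eta$, and the second-layer scaling are illustrative choices. It takes one full-batch gradient step on the first layer with $n$ linear in $d$ and a learning rate scaling with $d$, then checks that neurons have aligned with the hidden direction $w_\star$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 64, 128
n = 50 * d              # batch size linear in d (constant is illustrative)
eta = 2.0 * d           # a "giant" step: learning rate scaling with d

# Single-index target f*(x) = relu(<x, w*>); w* is the one direction to learn.
w_star = np.zeros(d)
w_star[0] = 1.0

X = rng.standard_normal((n, d))       # high-dimensional Gaussian inputs
y = np.maximum(X @ w_star, 0.0)

# Two-layer net f(x) = a^T relu(W x); only the first layer W is updated.
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / width

pre = X @ W.T                         # (n, width) pre-activations
resid = np.maximum(pre, 0.0) @ a - y  # residuals of the squared loss
grad_W = ((resid[:, None] * (pre > 0)) * a[None, :]).T @ X / n
W_new = W - eta * grad_W              # one full-batch gradient step

def mean_alignment(M):
    """Average |cos angle| between the neurons (rows of M) and w*."""
    return float(np.mean(np.abs(M @ w_star) / np.linalg.norm(M, axis=1)))

print(f"alignment at init:      {mean_alignment(W):.3f}")
print(f"alignment after 1 step: {mean_alignment(W_new):.3f}")
```

At initialization each neuron's overlap with $w_\star$ is the random $O(1/\sqrt{d})$ level; after the single large step the average alignment increases markedly, the qualitative effect the $n = \mathcal{O}(d)$ result describes.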
Problem

Research questions and friction points this paper is trying to address.

Investigates how two-layer neural networks adapt features through gradient descent steps.
Compares batch size impact on learning single vs. multiple target directions.
Analyzes improvement in approximation capacity and generalization error over initialization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large batch gradient descent steps
Neuron specialization in target directions
Staircase property for learning directions
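A small example may help make the staircase property concrete (the paper's formal definition is more general; this instance is our illustration). For Gaussian inputs $z \in \mathbb{R}^d$, consider the target

$$ f_\star(z) = z_1 + z_1 z_2 + z_1 z_2 z_3. $$

Each monomial adds exactly one new coordinate on top of coordinates already present, so gradient descent can pick up $e_1$, then $e_2$, then $e_3$ over successive steps with batches of size $n = \mathcal{O}(d)$. By contrast, a target like $f_\star(z) = z_1 z_2 z_3$, whose lowest-degree dependence on every direction is cubic, would have leap index $\ell = 3$, matching the abstract's $n = \mathcal{O}(d^\ell)$ sample requirement for hard directions.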