🤖 AI Summary
This work investigates the gradient training dynamics of deep residual networks (ResNets) under standard random initialization, focusing on how depth $L$, embedding dimension $D$, and hidden width $M$ jointly influence convergence and feature learning. It introduces a Neural Mean ODE framework that characterizes the mean-field dynamics in the large-depth limit ($L \to \infty$), enabling the first rigorous identification of a phase transition between complete feature learning and lazy training regimes. It derives an error bound of $O\big(\frac{1}{L} + \frac{\sqrt{D}}{\sqrt{LM}}\big)$ between the network's output and its limit, verified empirically to be tight, and rigorously establishes that complete feature learning requires the residual scaling $\Theta\big(\frac{\sqrt{D}}{LM}\big)$. By combining propagation-of-chaos analysis with stochastic-approximation techniques, the framework unifies the understanding of how depth, width, and residual scaling jointly govern training dynamics. The results provide principled theoretical guidance for initialization strategies and architectural design of deep ResNets.
📝 Abstract
We study the gradient-based training of large-depth residual networks (ResNets) from standard random initializations. We show that with a diverging depth $L$, a fixed embedding dimension $D$, and an arbitrary hidden width $M$, the training dynamics converges to a Neural Mean ODE training dynamics. Remarkably, the limit is independent of the scaling of $M$, covering practical cases of, say, Transformers, where $M$ (the number of hidden units or attention heads per layer) is typically of the order of $D$. For a residual scale $\Theta_D\big(\frac{\alpha}{LM}\big)$, we obtain the error bound $O_D\big(\frac{1}{L} + \frac{\alpha}{\sqrt{LM}}\big)$ between the model's output and its limit after a fixed number of gradient steps, and we verify empirically that this rate is tight. When $\alpha = \Theta(1)$, the limit exhibits complete feature learning, i.e., the Mean ODE is genuinely non-linearly parameterized. In contrast, we show that $\alpha \to \infty$ yields a lazy ODE regime where the Mean ODE is linearly parameterized. We then focus on the particular case of ResNets with two-layer perceptron blocks, for which we study how these scalings depend on the embedding dimension $D$. We show that, for this model, the only residual scale that leads to complete feature learning is $\Theta\big(\frac{\sqrt{D}}{LM}\big)$. In this regime, we prove the error bound $O\big(\frac{1}{L} + \frac{\sqrt{D}}{\sqrt{LM}}\big)$ between the ResNet and its limit after a fixed number of gradient steps, which is also empirically tight. Our convergence results rely on a novel mathematical perspective on ResNets: (i) due to the randomness of the initialization, the forward and backward passes through the ResNet behave as stochastic approximations of certain mean ODEs, and (ii) by propagation of chaos (that is, asymptotic independence of the units), this behavior is preserved through the training dynamics.
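To make the architecture and scaling concrete, here is a minimal NumPy sketch (not the paper's code; all names are illustrative) of the forward pass of a ResNet with two-layer perceptron blocks, i.i.d. Gaussian initialization, and a residual branch scaled by $\frac{\alpha}{LM}$. Setting $\alpha = \sqrt{D}$ corresponds to the $\Theta\big(\frac{\sqrt{D}}{LM}\big)$ feature-learning scale discussed above; the choice of ReLU activation and of $M = D$ is an assumption for illustration.

```python
import numpy as np

def resnet_forward(x, W_in, W_out, alpha):
    """Forward pass of a depth-L ResNet with two-layer perceptron blocks.

    Residual update at layer l (residual scale alpha / (L*M)):
        h <- h + (alpha / (L * M)) * W_out[l] @ relu(W_in[l] @ h)
    Shapes: x in R^D, W_in[l] in R^{M x D}, W_out[l] in R^{D x M}.
    """
    L, M, _ = W_in.shape
    h = x.astype(float).copy()
    for l in range(L):
        h = h + (alpha / (L * M)) * (W_out[l] @ np.maximum(W_in[l] @ h, 0.0))
    return h

# Standard random initialization: i.i.d. standard Gaussian weights.
rng = np.random.default_rng(0)
D, M, L = 8, 8, 64          # embedding dim, hidden width (M ~ D, as in Transformers), depth
W_in = rng.standard_normal((L, M, D))
W_out = rng.standard_normal((L, D, M))
x = rng.standard_normal(D)

alpha = np.sqrt(D)          # residual scale Theta(sqrt(D) / (L*M)): the feature-learning regime
y = resnet_forward(x, W_in, W_out, alpha)
print(y.shape)              # the output stays in the embedding space R^D
```

In this parameterization, taking $\alpha$ large instead would push the model toward the lazy ODE regime described in the abstract, where the limiting dynamics become linearly parameterized.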