🤖 AI Summary
This paper characterizes the complexity of fully connected deep neural networks, clarifying how depth and the choice of activation function (e.g., ReLU) shape the networks' structural properties. The authors propose a complexity-analysis framework based on the angular power spectrum of the Gaussian process that arises in the infinite-width limit, and they establish the asymptotic distribution of the associated spectral sequences as the depth diverges. On this basis they introduce three spectrum-based complexity classes, low-disorder, sparse, and high-disorder, and rigorously uncover activation-specific properties such as the intrinsic sparsity of ReLU networks. Combining weak convergence of stochastic processes, spherical harmonic analysis, and asymptotic statistics, the paper derives verifiable spectral characterizations and classification criteria; the theoretical findings are validated by numerical simulations.
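For context, the angular power spectrum referred to above is the standard spectral object for isotropic random fields on the sphere. The display below is a minimal illustration written for the case of S^2 with Legendre polynomials; the paper's ambient dimension and notation may differ.

```latex
% Standard expansion of an isotropic covariance on S^2 (illustrative notation,
% not necessarily the paper's): the coefficients C_\ell form the angular power spectrum.
\[
  \operatorname{Cov}\bigl(T(x),\,T(y)\bigr)
  \;=\; \sum_{\ell \ge 0} \frac{2\ell + 1}{4\pi}\, C_\ell \, P_\ell\!\bigl(\langle x, y\rangle\bigr),
  \qquad x, y \in \mathbb{S}^2,
\]
% Here P_\ell is the \ell-th Legendre polynomial and T is the zero-mean isotropic
% Gaussian field obtained as the infinite-width limit of the network.
```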
📝 Abstract
It is well known that randomly initialized, push-forward, fully-connected neural networks converge weakly to isotropic Gaussian processes in the limit where the widths of all layers go to infinity. In this paper, we propose to use the angular power spectrum of the limiting field to characterize the complexity of the network architecture. In particular, we define sequences of random variables associated with the angular power spectrum, and provide a full characterization of the network complexity in terms of the asymptotic distribution of these sequences as the depth diverges. On this basis, we classify neural networks as low-disorder, sparse, or high-disorder; we show how this classification highlights a number of distinct features of standard activation functions, and in particular, sparsity properties of ReLU networks. Our theoretical results are also validated by numerical simulations.
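As a rough illustration of the objects involved (not the paper's code or its exact estimator), the sketch below simulates randomly initialized fully connected ReLU networks on the circle S^1, estimates the covariance of the induced field by Monte Carlo over independent networks, and reads off an empirical angular power spectrum from the Fourier coefficients of that covariance. The width, depths, number of networks, and He-style initialization are illustrative assumptions.

```python
# Illustrative sketch only: Monte Carlo estimate of the angular power spectrum of the
# field induced by random ReLU networks on the circle S^1. All hyperparameters
# (width, depth, number of networks, He-style initialization) are assumptions made
# for illustration, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def random_relu_outputs(inputs, depth, width=256, n_nets=100):
    """Push `inputs` (n_points, d_in) through `n_nets` independent random ReLU nets."""
    outputs = np.empty((n_nets, inputs.shape[0]))
    for k in range(n_nets):
        h = inputs
        for _ in range(depth):
            W = rng.normal(0.0, np.sqrt(2.0 / h.shape[1]), size=(h.shape[1], width))
            h = np.maximum(h @ W, 0.0)                    # ReLU hidden layer
        w_out = rng.normal(0.0, np.sqrt(1.0 / width), size=width)
        outputs[k] = h @ w_out                            # scalar readout per input point
    return outputs

# Equispaced points on the circle S^1 embedded in R^2.
n_points = 256
theta = 2.0 * np.pi * np.arange(n_points) / n_points
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

for depth in (1, 4, 16):
    f = random_relu_outputs(X, depth)
    cov = (f.T @ f) / f.shape[0]                          # empirical covariance over nets
    # By isotropy the covariance depends only on the angular lag, so averaging over
    # diagonals gives a covariance function whose Fourier coefficients play the role
    # of the angular power spectrum C_ell on S^1.
    cov_fn = np.array([np.diag(np.roll(cov, -s, axis=1)).mean() for s in range(n_points)])
    C_ell = np.fft.rfft(cov_fn).real / n_points
    print(f"depth={depth:2d}  first few C_ell: {np.round(C_ell[:6], 4)}")
```

The printout merely makes the spectral sequence concrete for a few depths; how such sequences behave as the depth diverges is precisely what the paper's low-disorder, sparse, and high-disorder classification formalizes.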