Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks

📅 2023-05-11
🏛️ Neural Information Processing Systems
📈 Citations: 12
Influential: 0
🤖 AI Summary
This work investigates the theoretical advantage of three-layer neural networks over two-layer networks in hierarchical feature learning, particularly for target functions with inherent hierarchical structure (e.g., quadratic feature compositions). Method: We develop a layer-wise gradient descent analysis framework tailored to hierarchical objectives and establish the first sample complexity upper bound for three-layer networks learning nonlinear hierarchical features. We further construct an optimization-oriented depth-separation instance: provably learnable by a three-layer network, but not approximable to nontrivial accuracy by any two-layer network. Results: Our theory yields explicit, verifiable conditions on network width and sample size that guarantee low test error. Under identical distributional and architectural assumptions, the derived sample complexity strictly improves upon all existing guarantees for two-layer networks. This provides the first provable, feature-learning-based theoretical justification for the representational advantage of depth.
📝 Abstract
One of the central questions in the theory of deep learning is to understand how neural networks learn hierarchical features. The ability of deep networks to extract salient features is crucial to both their outstanding generalization ability and the modern deep learning paradigm of pretraining and fine-tuning. However, this feature learning process remains poorly understood from a theoretical perspective, with existing analyses largely restricted to two-layer networks. In this work we show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks. We analyze the features learned by a three-layer network trained with layer-wise gradient descent, and present a general-purpose theorem which upper bounds the sample complexity and width needed to achieve low test error when the target has specific hierarchical structure. We instantiate our framework in specific statistical learning settings -- single-index models and functions of quadratic features -- and show that in the latter setting three-layer networks obtain a sample complexity improvement over all existing guarantees for two-layer networks. Crucially, this sample complexity improvement relies on the ability of three-layer networks to efficiently learn nonlinear features. We then establish a concrete optimization-based depth separation by constructing a function which is efficiently learnable via gradient descent on a three-layer network, yet cannot be learned efficiently by a two-layer network. Our work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
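The "functions of quadratic features" setting mentioned in the abstract can be sketched as follows: the target composes a nonlinear link with a quadratic form of the input. The matrix `A`, the cosine link, and the dimension below are illustrative assumptions for the sketch, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
A = rng.standard_normal((d, d))
A = (A + A.T) / 2  # symmetric matrix defining the quadratic feature

def quadratic_feature(x):
    # inner feature q(x) = x^T A x: a nonlinear (quadratic) function of x
    return x @ A @ x

def target(x):
    # hierarchical target f(x) = g(q(x)); the link g = cos is an
    # illustrative choice, not the paper's specific instantiation
    return np.cos(quadratic_feature(x))
```

Learning such a target well requires recovering the nonlinear inner feature q, which is exactly what a two-layer network's single layer of linear features cannot do efficiently.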
Problem

Research questions and friction points this paper is trying to address.

Understand hierarchical feature learning in three-layer neural networks
Compare feature learning capabilities of two- and three-layer networks
Establish optimization-based depth separation for efficient learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-layer networks learn nonlinear features efficiently
Layer-wise gradient descent improves feature learning
Sample complexity reduced for hierarchical structures
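The layer-wise training idea in the bullets above can be sketched as a three-layer tanh network whose layers are updated one at a time by gradient descent while the others stay frozen. The architecture, activation, learning rate, and teacher labels here are illustrative assumptions, not the paper's construction; labels depend on a quadratic feature, mirroring the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m1, m2, n = 5, 16, 16, 256

# synthetic labels that depend on a quadratic feature q(x) = x^T A x
A = rng.standard_normal((d, d)); A = (A + A.T) / 2
X = rng.standard_normal((n, d))
y = np.tanh(np.einsum("ni,ij,nj->n", X, A, X))

# three-layer network: x -> tanh(W1 x) -> tanh(W2 h1) -> a . h2
W1 = rng.standard_normal((m1, d)) / np.sqrt(d)
W2 = rng.standard_normal((m2, m1)) / np.sqrt(m1)
a = rng.standard_normal(m2) / np.sqrt(m2)

def forward(X):
    H1 = np.tanh(X @ W1.T)
    H2 = np.tanh(H1 @ W2.T)
    return H1, H2, H2 @ a

def mse(pred):
    return np.mean((pred - y) ** 2)

lr, steps = 0.05, 200
loss0 = mse(forward(X)[2])

# layer-wise schedule: update exactly one layer per stage, earliest first
for stage in ("W1", "W2", "a"):
    for _ in range(steps):
        H1, H2, pred = forward(X)
        err = (pred - y) / n                     # d(mse)/d(pred), up to a factor 2
        if stage == "a":
            a -= lr * 2 * (H2.T @ err)
        else:
            dZ2 = (err[:, None] * a) * (1 - H2 ** 2)   # backprop through tanh
            if stage == "W2":
                W2 -= lr * 2 * (dZ2.T @ H1)
            else:
                dZ1 = (dZ2 @ W2) * (1 - H1 ** 2)
                W1 -= lr * 2 * (dZ1.T @ X)

loss1 = mse(forward(X)[2])
```

Freezing all but one layer makes each stage a well-conditioned subproblem, which is what makes the layer-wise dynamics amenable to the paper's analysis.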