Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning

📅 2024-02-12
🏛️ International Conference on Machine Learning
📈 Citations: 5
✨ Influential: 1
🤖 AI Summary
Convolutional neural networks (CNNs) empirically exhibit strong frequency biases and hierarchical processing, yet the structural principles underlying their representational efficiency remain poorly understood. Method: We introduce the Convolution Bottleneck (CBN), a naturally emergent architectural motif in which the early layers compress inputs into sparse frequency-channel representations and the later layers reconstruct the outputs from this compressed representation. We define the CBN rank to quantify the number and types of critical frequencies preserved in the bottleneck, combining Fourier-domain analysis, parameter-complexity theory, stability-driven derivations of activation/weight structure, and empirical validation. Contribution/Results: We show that the parameter norm scales linearly with network depth and CBN rank, and at next order with the regularity of the target function. Crucially, we provide a theoretical justification for downsampling grounded in parameter efficiency: CNNs with near-optimal parameter norm must exhibit a CBN structure. Our framework decodes learned frequency preferences and functional mechanisms in multi-task CNNs, giving a principled basis for understanding CNN representation learning and designing efficient architectures.
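To make the compression claim concrete, here is a minimal probe, not the authors' code, for inspecting frequency concentration layer by layer: push activations through an FFT over the spatial dimension and check how much energy the top few frequencies carry at each depth. The toy architecture, circular padding, and top-k choice are illustrative assumptions; a randomly initialized network like this one will not show a bottleneck, only a trained one should.

```python
# Hedged sketch (assumes PyTorch): probe how concentrated the Fourier energy
# of hidden activations is at each depth. A CBN would show up as most energy
# collapsing onto a few frequencies in the middle layers of a trained net.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1D CNN with circular padding, so the spatial axis is a torus and the
# discrete Fourier transform diagonalizes each convolution.
layers = nn.Sequential(*[
    nn.Sequential(
        nn.Conv1d(8, 8, kernel_size=3, padding=1, padding_mode="circular"),
        nn.ReLU(),
    )
    for _ in range(6)
])

x = torch.randn(32, 8, 64)  # (batch, channels, length)

h = x
for depth, layer in enumerate(layers, start=1):
    h = layer(h)
    # Energy per spatial frequency, summed over batch and channels.
    energy = torch.fft.rfft(h, dim=-1).abs().pow(2).sum(dim=(0, 1))
    frac = energy / energy.sum()
    top = torch.topk(frac, k=3)
    print(f"layer {depth}: top-3 frequencies {top.indices.tolist()} "
          f"carry {top.values.sum().item():.2%} of the energy")
```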

📝 Abstract
We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank of $f$. We also show that the parameter norm depends at next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and, under the assumption that the network is stable under large learning rates, the activations, which motivates the common practice of down-sampling; and we verify that the CBN results still hold with down-sampling. Finally, we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.
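Written as a formula, the abstract's leading-order claim reads as below; the notation $\operatorname{rank}_{\mathrm{CBN}}$ and the correction term $c(f)$ are shorthand adopted here, not the paper's exact statement.

```latex
% Paraphrase of the scaling claim: the minimal parameter norm needed to
% represent f with a depth-L CNN grows like depth times the CBN rank of f,
% plus a next-order term c(f) governed by the regularity of f.
\[
  \min_{\theta \,:\, f_\theta = f} \lVert\theta\rVert^2
  \;\approx\; L \cdot \operatorname{rank}_{\mathrm{CBN}}(f) \;+\; c(f)
\]
```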
Problem

Research questions and friction points this paper is trying to address.

Explains emergence of Convolution Bottleneck structure in CNNs.
Defines CBN rank to describe frequency retention in networks.
Links parameter norm to function regularity and network depth (see the sketch after this list).
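For concreteness, the "parameter norm" in the last bullet is just the sum of squared weights. A minimal sketch, assuming PyTorch and an illustrative stand-in architecture:

```python
# Hedged sketch: the squared parameter norm of a CNN, the quantity the paper
# relates to depth and CBN rank. Architecture and sizes are illustrative.
import torch.nn as nn

def parameter_norm_sq(model: nn.Module) -> float:
    """Sum of squared entries over all trainable parameters."""
    return sum(p.pow(2).sum().item() for p in model.parameters() if p.requires_grad)

def make_cnn(depth: int, channels: int = 8) -> nn.Module:
    return nn.Sequential(*[
        nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1,
                      padding_mode="circular"),
            nn.ReLU(),
        )
        for _ in range(depth)
    ])

# At initialization this merely tracks parameter count; the paper's claim
# concerns the minimal norm needed to *represent* a target function, which
# one would approach by training with weight decay.
for depth in (2, 4, 8):
    print(depth, round(parameter_norm_sq(make_cnn(depth)), 2))
```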
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convolution Bottleneck (CBN) structure emergence
CBN rank defines frequency and channel retention (see the sketch after this list)
Parameter norm scales with depth and CBN rank
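One hedged way to operationalize the rank in the second bullet: count the frequency-channel pairs at the bottleneck whose Fourier energy is non-negligible. The threshold and the batch-averaging are assumptions made here; the paper's CBN rank is defined for the represented function itself.

```python
# Hedged sketch (assumes PyTorch): an empirical stand-in for the CBN rank,
# counting frequency-channel pairs that carry a non-negligible share of the
# Fourier energy of the bottleneck activations. Threshold is an assumption.
import torch

def empirical_cbn_rank(h: torch.Tensor, thresh: float = 1e-3) -> int:
    """h: bottleneck activations of shape (batch, channels, length)."""
    # Energy per (channel, frequency) pair, averaged over the batch.
    energy = torch.fft.rfft(h, dim=-1).abs().pow(2).mean(dim=0)  # (C, F)
    frac = energy / energy.sum()
    return int((frac > thresh).sum().item())

h = torch.randn(32, 8, 64)  # stand-in for real trained-network activations
print(empirical_cbn_rank(h))
```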
🔎 Similar Papers
No similar papers found.
Yuxiao Wen
Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
Arthur Jacot
Assistant Professor, Courant Institute of Mathematical Sciences, NYU