🤖 AI Summary
This work addresses the curse of dimensionality in high-dimensional function approximation by investigating the approximation capacity of two-dimensional deep ReLU convolutional neural networks (CNNs) on the Korobov function class. The approach is fully constructive: a CNN architecture comprising zero-padding, multi-channel convolutional layers, and a fully connected output layer is designed, and all network parameters are explicitly constructed under the continuous-weight model. This establishes, for the first time, a near-optimal approximation rate for two-dimensional CNNs on the Korobov class: an approximation error bound of $O(N^{-\alpha}\log N)$, where $N$ denotes the total number of network parameters and $\alpha$ is determined by the smoothness of the target function. The result rigorously characterizes the expressive power of two-dimensional CNNs, reveals their efficient dimensional adaptivity to high-dimensional structured functions, and provides a foundational theoretical guarantee for deep convolutional models approximating smooth multivariate functions.
📝 Abstract
This paper investigates the approximation capabilities of two-dimensional (2D) deep convolutional neural networks (CNNs), with Korobov functions serving as a benchmark. The networks considered comprise multi-channel convolutional layers with zero-padding and ReLU activations, followed by a fully connected layer. We propose a fully constructive approach for building 2D CNNs that approximate Korobov functions and provide a rigorous analysis of the complexity of the constructed networks. Our results demonstrate that 2D CNNs achieve near-optimal approximation rates under the continuous weight selection model, significantly alleviating the curse of dimensionality. This work provides a solid theoretical foundation for 2D CNNs and illustrates their potential for broader applications in function approximation.
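To make the architecture concrete, here is a minimal plain-Python sketch of the network class the abstract describes: zero-padded multi-channel 2D convolutions with ReLU activations, followed by a fully connected output layer. The shapes, kernel sizes, and weight values below are illustrative assumptions only; the paper constructs the actual weights explicitly to achieve the stated approximation rate.

```python
def relu(x):
    """ReLU activation, applied entrywise by the convolution below."""
    return max(0.0, x)

def conv2d(inputs, kernels, pad):
    """One zero-padded multi-channel convolutional layer with ReLU.

    inputs : list of C_in grids, each an H x W list of lists.
    kernels: C_out stacks, each C_in x k x k (one stack per output channel).
    pad    : amount of zero-padding on every side of each input grid.
    Returns C_out feature maps of size (H + 2*pad - k + 1) square-ish.
    """
    c_in = len(inputs)
    h, w = len(inputs[0]), len(inputs[0][0])
    k = len(kernels[0][0])
    # zero-padding: embed each input grid into a larger grid of zeros
    padded = [[[0.0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
              for _ in range(c_in)]
    for c in range(c_in):
        for i in range(h):
            for j in range(w):
                padded[c][pad + i][pad + j] = inputs[c][i][j]
    h_out, w_out = h + 2 * pad - k + 1, w + 2 * pad - k + 1
    outputs = []
    for ker in kernels:  # one kernel stack per output channel
        grid = [[0.0] * w_out for _ in range(h_out)]
        for i in range(h_out):
            for j in range(w_out):
                s = 0.0
                for c in range(c_in):       # sum over input channels
                    for a in range(k):
                        for b in range(k):
                            s += ker[c][a][b] * padded[c][i + a][j + b]
                grid[i][j] = relu(s)        # ReLU activation
        outputs.append(grid)
    return outputs

def fully_connected(features, weights, bias):
    """Fully connected output layer: flatten all feature maps, take a
    weighted sum plus bias to produce the scalar approximation."""
    flat = [v for grid in features for row in grid for v in row]
    return sum(w * x for w, x in zip(weights, flat)) + bias

# Example: one 4x4 input channel, two 3x3 kernel stacks, padding 1,
# giving two 4x4 feature maps (32 features) fed to the output layer.
x = [[[float(i + j) for j in range(4)] for i in range(4)]]
kernels = [[[[0.1] * 3 for _ in range(3)]],
           [[[-0.1] * 3 for _ in range(3)]]]
features = conv2d(x, kernels, pad=1)
output = fully_connected(features, [0.01] * 32, 0.0)
```

Deeper networks of this form are obtained by composing `conv2d` layers before the final `fully_connected` map; the paper's contribution is choosing the depth, channel counts, and weights so that the resulting function approximates a given Korobov function at the near-optimal rate.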