🤖 AI Summary
This work addresses the approximation error of ReLU neural networks for Korobov functions under the $L_p$ and $W^1_p$ norms. To mitigate the curse of dimensionality in high-dimensional function approximation, we propose a novel constructive method integrating sparse-grid finite elements with bit-extraction techniques, explicitly prescribing network width and depth to achieve super-approximation rates. Theoretically, we establish nearly optimal approximation bounds: $O(N^{-2m})$ in the $L_p$ norm and $O(N^{-2m+2})$ in the $W^1_p$ norm, where $N$ denotes the total number of trainable parameters; these rates substantially improve upon classical $L_\infty$ and $H^1$ error bounds. Our analysis demonstrates that the proposed architecture significantly alleviates dimensional dependence, providing both tight theoretical guarantees and an implementable construction paradigm for efficient neural approximation of high-dimensional smooth functions.
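To make the sparse-grid ingredient concrete, here is a minimal Python sketch (illustrative only, not from the paper; `hat`, `sparse_grid_indices`, and `phi` are hypothetical names) of the hierarchical hat basis and the level truncation $|l|_1 \le n + d - 1$ that distinguishes a sparse grid from a full tensor grid, together with a count of how the degrees of freedom scale with the dimension $d$:

```python
import itertools
import numpy as np

def hat(x):
    # 1-D hat function on [-1, 1]; a ReLU network reproduces it exactly:
    # hat(x) = relu(x + 1) - 2*relu(x) + relu(x - 1).
    return np.maximum(0.0, 1.0 - np.abs(x))

def sparse_grid_indices(d, n):
    # Level multi-indices l in {1, ..., n}^d kept by the sparse-grid
    # truncation |l|_1 <= n + d - 1 (a full grid keeps max_k l_k <= n).
    return [l for l in itertools.product(range(1, n + 1), repeat=d)
            if sum(l) <= n + d - 1]

def phi(x, l, i):
    # Tensor-product hierarchical basis function phi_{l,i} on [0, 1]^d:
    # a product of dilated and translated 1-D hats.
    out = 1.0
    for xk, lk, ik in zip(x, l, i):
        out *= hat(2.0 ** lk * xk - ik)
    return out

# On level l there are 2^(l-1) hierarchical nodes per dimension (odd i),
# so the sparse grid has O(2^n * n^(d-1)) nodes versus O(2^(n*d)) for the
# full tensor grid: the saving that tames the dimension dependence.
for d in (2, 3, 4):
    n = 5
    dofs = sum(2 ** (sum(l) - d) for l in sparse_grid_indices(d, n))
    print(f"d={d}: sparse dofs = {dofs}, full grid = {(2**n - 1)**d}")
```

Because the hat function is exactly a three-neuron ReLU combination, sparse-grid finite elements translate directly into explicit ReLU network constructions.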
📝 Abstract
This paper examines the $L_p$ and $W^1_p$ norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds of order $2m$ in the $L_p$ norm and order $2m-2$ in the $W^1_p$ norm, for target functions whose mixed derivatives of order $m$ in each direction lie in $L_p$. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest-order $L_\infty$ and $H^1$ norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.
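The bit-extraction technique, which underlies the super-approximation phenomenon, can be illustrated by plain arithmetic: many coefficient bits are packed into a single scalar parameter and recovered one at a time. Below is a minimal Python sketch (hypothetical `encode`/`extract` names, not the paper's construction; in the actual network the thresholding step is emulated to arbitrary accuracy by a small fixed-size ReLU subnetwork):

```python
def encode(bits):
    # Pack the bit string b_1, ..., b_K into a single scalar
    # theta = sum_j b_j * 2^{-j} in [0, 1).
    return sum(b * 2.0 ** -(j + 1) for j, b in enumerate(bits))

def extract(theta, K):
    # Recover the bits one at a time: the leading bit is 1 exactly when
    # 2*theta >= 1; subtract it and repeat on the remainder. In the ReLU
    # construction this threshold is realized by a small subnetwork.
    bits = []
    for _ in range(K):
        theta *= 2.0
        b = int(theta >= 1.0)
        bits.append(b)
        theta -= b
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract(encode(bits), len(bits)) == bits)  # True
```

Storing many bits per parameter is what lets the error decay faster than the naive parameter count would suggest, at the price of a deeper decoding subnetwork.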