Lattice-based Deep Neural Networks: Regularity and Tailored Regularization

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of deteriorating generalization error in deep neural networks when approximating high-dimensional functions, where error bounds typically grow with input dimensionality. To mitigate this curse of dimensionality, the authors propose a tailored deep learning framework that combines lattice rule-based sampling (a quasi-Monte Carlo method) for training points with a custom regularization scheme informed by the target function’s regularity. By incorporating smooth activation functions and parameter constraints aligned with the function’s smoothness, the approach yields a model whose generalization error bound features a dimension-independent constant. Theoretical analysis establishes this favorable scaling, and numerical experiments demonstrate that the proposed method significantly outperforms standard ℓ² regularization, achieving superior approximation accuracy in high-dimensional settings.
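To make the lattice-rule sampling step concrete, below is a minimal Python sketch of the standard rank-1 lattice construction, which generates training points x_k = frac(k·z/n) in [0, 1)^d from an integer generating vector z. The specific generating vector and the optional random shift here are placeholders for illustration, not values from the paper; in practice z would come from a component-by-component (CBC) construction tuned to the function space.

```python
import numpy as np

def lattice_points(n, z, shift=None):
    """Rank-1 lattice rule points x_k = frac(k * z / n) in [0, 1)^d.

    n     : number of points
    z     : integer generating vector of length d
    shift : optional random shift in [0, 1)^d (randomized QMC variant)
    """
    z = np.asarray(z, dtype=np.int64)
    k = np.arange(n).reshape(-1, 1)   # column of point indices, shape (n, 1)
    pts = (k * z % n) / n             # fractional parts of k*z/n, shape (n, d)
    if shift is not None:
        pts = (pts + shift) % 1.0     # shift modulo 1 keeps points in [0, 1)^d
    return pts

# Example: 64 points in 4 dimensions with a placeholder generating vector.
z = [1, 19, 27, 35]
pts = lattice_points(64, z, shift=np.random.default_rng(0).random(4))
```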

📝 Abstract
This survey article concerns the application of lattice rules, a family of quasi-Monte Carlo methods, to Deep Neural Networks (DNNs). Lattice rules have proved effective in many high-dimensional integration and function-approximation settings, and they are extremely easy to implement thanks to their simple formulation: all that is required is a good integer generating vector whose length matches the dimensionality of the problem. In recent years there has been a burst of research activity on the application and theory of DNNs. We review our recent article on using lattice rules as training points for DNNs with a smooth activation function, where we obtained explicit regularity bounds for the DNNs. By imposing restrictions on the network parameters to match the regularity features of the target function, we prove that DNNs with tailored lattice training points can achieve good theoretical generalization error bounds, with implied constants independent of the input dimension. We also demonstrate numerically that DNNs trained with our tailored regularization perform significantly better than those trained with standard $\ell_2$ regularization.
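As a rough illustration of what a tailored, regularity-informed penalty might look like next to plain $\ell_2$ regularization, here is a minimal PyTorch sketch: each layer's weights receive their own penalty scale, standing in for parameter restrictions matched to the target function's smoothness. The per-layer scales, the toy target, and the training loop are illustrative assumptions; the paper's actual constraints and penalty are not reproduced here.

```python
import torch

# Small DNN with a smooth (tanh) activation, matching the paper's setting
# of smooth activation functions.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def tailored_penalty(model, layer_scales):
    """Hypothetical layer-wise weighted penalty.  Each scale bounds how
    strongly that layer's weights are penalized; the scales here are
    illustrative assumptions, not the published scheme.  With all scales
    equal this reduces to standard l2 regularization on the weights."""
    linears = [m for m in model if isinstance(m, torch.nn.Linear)]
    return sum(s * m.weight.pow(2).sum() for s, m in zip(layer_scales, linears))

x = torch.rand(64, 4)                        # stand-in for lattice training points
y = torch.sin(x.sum(dim=1, keepdim=True))    # toy smooth high-dimensional target
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2) \
        + 1e-4 * tailored_penalty(model, layer_scales=[1.0, 2.0, 4.0])
    loss.backward()
    opt.step()
```

In a faithful implementation the inputs `x` would be rank-1 lattice points (as in the earlier sketch) and the penalty scales would be derived from the target function's regularity, which is exactly the part of the paper this sketch does not attempt to reproduce.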
Problem

Research questions and friction points this paper is trying to address.

lattice rules
Deep Neural Networks
regularity
generalization error
quasi-Monte Carlo
Innovation

Methods, ideas, or system contributions that make the work stand out.

lattice rules
deep neural networks
tailored regularization
quasi-Monte Carlo
generalization error