On Uniform Weighted Deep Polynomial approximation

📅 2025-06-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Efficient approximation of nonsmooth functions with one-sided unbounded behavior—such as |x| and x^{1/p}—remains challenging because they combine local singularities with asymmetric global growth. Method: The paper proposes a learnable, one-sided weighted deep polynomial framework that integrates analytically constructed one-sided weight functions, learnable deep polynomial bases, and a stable graph-based parameterization strategy. This design overcomes the algebraic convergence rate limitations inherent in classical polynomial approximations. Contribution/Results: Numerically, the method achieves significantly higher approximation accuracy for asymmetric singular functions than Taylor series, Chebyshev expansions, and standard deep polynomials under identical parameter budgets, jointly modeling local nonsmoothness and global growth and thereby enabling principled representation of functions with heterogeneous regularity and unbounded asymmetry.

📝 Abstract
It is a classical result in rational approximation theory that certain non-smooth or singular functions, such as $|x|$ and $x^{1/p}$, can be efficiently approximated using rational functions with root-exponential convergence in terms of degrees of freedom [Sta, GN]. In contrast, polynomial approximations admit only algebraic convergence by Jackson's theorem [Lub2]. Recent work shows that composite polynomial architectures can recover exponential approximation rates even without smoothness [KY]. In this work, we introduce and analyze a class of weighted deep polynomial approximants tailored for functions with asymmetric behavior: growing unbounded on one side and decaying on the other. By multiplying a learnable deep polynomial with a one-sided weight, we capture both local non-smoothness and global growth. We show numerically that this framework outperforms Taylor, Chebyshev, and standard deep polynomial approximants, even when all use the same number of parameters. To optimize these approximants in practice, we propose a stable graph-based parameterization strategy building on [Jar].
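The abstract's contrast between composite ("deep") polynomials and dense polynomial expansions can be illustrated with a classical composite construction for |x|. The fixed Newton-Schulz sign iteration below is an illustrative sketch only, not the paper's learnable weighted framework:

```python
import numpy as np

def composite_abs(x, iters):
    # Newton-Schulz iteration t_{k+1} = t_k * (3 - t_k^2) / 2 drives t toward
    # sign(x) on [-1, 1], so x * t_k approaches |x|. After `iters` levels the
    # composition has polynomial degree 3^iters at only a few coefficients
    # per level.
    t = x.copy()
    for _ in range(iters):
        t = 0.5 * t * (3.0 - t * t)
    return x * t

x = np.linspace(-1.0, 1.0, 10001)
target = np.abs(x)

# Max errors of the composite approximant at increasing depth.
errs = [np.max(np.abs(composite_abs(x, k) - target)) for k in (5, 15, 30)]

# Dense Chebyshev least-squares fit with a much larger coefficient budget,
# limited to the algebraic O(1/n) Jackson rate for |x|.
cheb = np.polynomial.chebyshev.Chebyshev.fit(x, target, deg=61)
err_cheb = np.max(np.abs(cheb(x) - target))
```

On this grid the composite error drops rapidly with depth and, at depth 30, falls below the degree-61 Chebyshev fit despite using far fewer coefficients, which is the rate gap the abstract refers to.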
Problem

Research questions and friction points this paper is trying to address.

Approximating non-smooth functions with deep polynomials
Handling asymmetric growth and decay in functions
Improving accuracy over classical polynomial methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighted deep polynomial approximants for asymmetric functions
Learnable deep polynomial with one-sided weight
Stable graph-based parameterization strategy
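The one-sided weight idea from the bullets above can be sketched with ordinary linear least squares: multiply a polynomial basis by an analytically chosen weight so the product decays on one side while the polynomial absorbs the rest. The weight w(x) = exp(-x), the target, and the degree here are illustrative assumptions, not the paper's construction:

```python
import numpy as np

x = np.linspace(0.0, 8.0, 400)
# Target with heterogeneous behavior: singular slope at 0, decay as x grows.
target = np.exp(-x) * np.cbrt(x)

deg = 6
V = np.vander(x, deg + 1, increasing=True)  # plain monomial basis
W = np.exp(-x)[:, None] * V                 # one-sided weighted basis

# Fit coefficients of each model by linear least squares.
c_plain = np.linalg.lstsq(V, target, rcond=None)[0]
c_wtd = np.linalg.lstsq(W, target, rcond=None)[0]

err_plain = np.max(np.abs(V @ c_plain - target))
err_wtd = np.max(np.abs(W @ c_wtd - target))
```

Because the weight carries the global decay exactly, the weighted model only needs its polynomial factor to track the singular part; the paper replaces the fixed polynomial basis with a learnable deep polynomial.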
Kingsley Yeon
University of Chicago
numerical analysis, approximation theory, neural network theory
Steven B. Damelin
Department of Mathematics, ZBMATH-OPEN Leibniz Institute for Information Infrastructure, Germany