🤖 AI Summary
Neural network function approximation suffers from low training efficiency, poor generalization—especially in extrapolation—and non-transferable parameters. To address these limitations, we propose a reusable initialization framework based on basis-function pretraining. Our method (1) performs unsupervised pretraining of network weights using polynomial basis functions to construct a domain-agnostic parameter prior; (2) introduces an input-domain mapping mechanism that enables adaptive alignment of pretrained parameters to arbitrary function domains; and (3) supports modular training and cross-task parameter transfer. Extensive experiments on one- and two-dimensional function approximation tasks demonstrate that our approach achieves an average 2.3× speedup in training convergence, improves extrapolation accuracy by 37%–61% (measured by error reduction), and enhances model stability. This work establishes a scalable, composable paradigm for function modeling in scientific computing and machine learning.
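The core idea of step (1) can be sketched in a few lines: pretrain a small network on a polynomial basis function over a reference domain, then reuse its weights as a warm start for a related target function. This is a minimal illustration, not the paper's implementation; the architecture, optimizer, and all names (`init_params`, `train`, the choice of a single monomial target) are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(hidden=16):
    """Random weights for a one-hidden-layer tanh network R -> R (illustrative)."""
    return {
        "W1": rng.normal(0.0, 1.0, (hidden, 1)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (1, hidden)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"].T + p["b1"])
    return h @ p["W2"].T + p["b2"]

def train(p, x, y, lr=0.05, steps=2000):
    """Full-batch gradient descent on 1/2 * mean squared error, manual backprop."""
    n = len(x)
    for _ in range(steps):
        h = np.tanh(x @ p["W1"].T + p["b1"])       # hidden activations (n, hidden)
        err = (h @ p["W2"].T + p["b2"]) - y        # residual (n, 1)
        dh = (err / n) @ p["W2"] * (1.0 - h**2)    # backprop through tanh
        grads = {
            "W2": err.T @ h / n, "b2": err.mean(0),
            "W1": dh.T @ x,      "b1": dh.sum(0),
        }
        for k in p:
            p[k] -= lr * grads[k]
    return p

# Pretrain on a polynomial basis function (here x^2) on the reference domain [-1, 1].
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
pretrained = train(init_params(), x, x**2, steps=3000)

# Reuse the pretrained parameters as initialization for a related target function.
y_target = 0.5 * x**2 + 0.3 * x
warm = train({k: v.copy() for k, v in pretrained.items()}, x, y_target, steps=2000)
warm_mse = float(np.mean((forward(warm, x) - y_target) ** 2))
```

In this toy setting the warm-started network only has to re-weight features it already learned for the basis function, which is the intuition behind the reported convergence speedup; the 2.3× figure itself comes from the paper's experiments, not from this sketch.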
📝 Abstract
Neural network-based function approximation plays a pivotal role in the advancement of scientific computing and machine learning. Yet, training such models faces several challenges: (i) each target function often requires training a new model from scratch; (ii) performance is highly sensitive to architectural and hyperparameter choices; and (iii) models frequently generalize poorly beyond the training domain. To overcome these challenges, we propose a reusable initialization framework based on basis-function pretraining. In this approach, basis neural networks are first trained to approximate families of polynomials on a reference domain. Their learned parameters are then used to initialize networks for more complex target functions. To enhance adaptability across arbitrary domains, we further introduce a domain mapping mechanism that transforms inputs into the reference domain, thereby preserving structural correspondence with the pretrained models. Extensive numerical experiments in one- and two-dimensional settings demonstrate substantial improvements in training efficiency, generalization, and model transferability, highlighting the promise of initialization-based strategies for scalable and modular neural function approximation. The full code is made publicly available on Gitee.
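The domain mapping mechanism described above can be realized with a simple affine transform that sends an arbitrary interval [a, b] onto the reference domain [-1, 1] before the pretrained network is applied. The sketch below is a plausible reading of that mechanism, with hypothetical names (`to_reference`, `DomainMapped`); the paper's actual mapping may differ in detail.

```python
def to_reference(x, a, b):
    """Affine map from the target domain [a, b] onto the reference domain [-1, 1]."""
    return 2.0 * (x - a) / (b - a) - 1.0

class DomainMapped:
    """Wrap a model trained on [-1, 1] so it can be evaluated on any interval [a, b]."""
    def __init__(self, model, a, b):
        self.model, self.a, self.b = model, a, b

    def __call__(self, x):
        # Inputs are first pulled back to the reference domain, preserving the
        # structural correspondence with the pretrained parameters.
        return self.model(to_reference(x, self.a, self.b))

# Example: a model pretrained to represent t -> t^2 on [-1, 1], reused on [2, 6].
reference_model = lambda t: t**2
g = DomainMapped(reference_model, 2.0, 6.0)
```

Because the map is affine, smooth structure learned on the reference domain (e.g. polynomial behavior) carries over to the target domain up to a linear change of variables, which is what lets a single set of pretrained parameters serve many domains.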