Weights initialization of neural networks for function approximation

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network function approximation suffers from low training efficiency, poor generalization—especially in extrapolation—and non-transferable parameters. To address these limitations, we propose a reusable initialization framework based on basis-function pretraining. Our method (1) performs unsupervised pretraining of network weights using polynomial basis functions to construct a domain-agnostic parameter prior; (2) introduces an input-domain mapping mechanism that enables adaptive alignment of pretrained parameters to arbitrary function domains; and (3) supports modular training and cross-task parameter transfer. Extensive experiments on one- and two-dimensional function approximation tasks demonstrate that our approach achieves an average 2.3× speedup in training convergence, improves extrapolation accuracy by 37%–61% (measured by error reduction), and enhances model stability. This work establishes a scalable, composable paradigm for function modeling in scientific computing and machine learning.
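The input-domain mapping mechanism described above can be illustrated with a minimal sketch: assuming the reference domain is [-1, 1] (a common convention, though the paper does not state it here), any interval [a, b] can be aligned to it with an affine transform, so a network pretrained on the reference domain can be applied to arbitrary domains. The function name `to_reference` is a hypothetical illustration, not the paper's API.

```python
import numpy as np

def to_reference(x, a, b):
    """Affine map from an arbitrary domain [a, b] onto the
    assumed reference domain [-1, 1]."""
    return 2.0 * (x - a) / (b - a) - 1.0

# Inputs from [0, 10] are rescaled before being fed to a network
# whose weights were pretrained on the reference domain.
x = np.linspace(0.0, 10.0, 5)
x_ref = to_reference(x, 0.0, 10.0)   # now spans [-1, 1]
```

The map is invertible, so predictions can be interpreted back on the original domain; this is what preserves structural correspondence with the pretrained models.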

📝 Abstract
Neural network-based function approximation plays a pivotal role in the advancement of scientific computing and machine learning. Yet, training such models faces several challenges: (i) each target function often requires training a new model from scratch; (ii) performance is highly sensitive to architectural and hyperparameter choices; and (iii) models frequently generalize poorly beyond the training domain. To overcome these challenges, we propose a reusable initialization framework based on basis function pretraining. In this approach, basis neural networks are first trained to approximate families of polynomials on a reference domain. Their learned parameters are then used to initialize networks for more complex target functions. To enhance adaptability across arbitrary domains, we further introduce a domain mapping mechanism that transforms inputs into the reference domain, thereby preserving structural correspondence with the pretrained models. Extensive numerical experiments in one- and two-dimensional settings demonstrate substantial improvements in training efficiency, generalization, and model transferability, highlighting the promise of initialization-based strategies for scalable and modular neural function approximation. The full code is made publicly available on Gitee.
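The pretrain-then-reuse idea in the abstract can be sketched as follows, under assumptions not specified there: a small tanh MLP (hypothetical architecture), a single polynomial basis element x² on [-1, 1] as the pretraining target, and plain gradient descent. The pretrained parameters then initialize a network for a related target function instead of a random restart. This is an illustration of the strategy, not the authors' released code (which is on Gitee).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(hidden=16):
    # Random initialization for a 1-hidden-layer tanh MLP (assumed architecture).
    return {"W1": rng.normal(0.0, 1.0, (hidden, 1)),
            "b1": np.zeros((hidden, 1)),
            "W2": rng.normal(0.0, 0.1, (1, hidden)),
            "b2": np.zeros((1, 1))}

def forward(p, x):
    h = np.tanh(p["W1"] @ x + p["b1"])
    return p["W2"] @ h + p["b2"], h

def train(p, x, y, lr=0.1, steps=3000):
    # Full-batch gradient descent on mean squared error.
    n = x.shape[1]
    for _ in range(steps):
        yhat, h = forward(p, x)
        e = yhat - y
        dh = (p["W2"].T @ e) * (1.0 - h**2)   # backprop through tanh
        p["W2"] -= lr * (e @ h.T) / n
        p["b2"] -= lr * e.mean(axis=1, keepdims=True)
        p["W1"] -= lr * (dh @ x.T) / n
        p["b1"] -= lr * dh.mean(axis=1, keepdims=True)
    return p

# 1) Pretrain a "basis network" on a polynomial basis element on the
#    reference domain [-1, 1].
x = np.linspace(-1.0, 1.0, 200).reshape(1, -1)
basis = train(init_params(), x, x**2)

# 2) Reuse the pretrained weights as initialization for a related target
#    function, rather than training from scratch.
target = {k: v.copy() for k, v in basis.items()}
target = train(target, x, x**2 + 0.3 * x, steps=500)
```

Because the target shares structure with the pretrained basis, the fine-tuning phase starts close to a good solution, which is the source of the convergence speedup the paper reports.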
Problem

Research questions and friction points this paper is trying to address.

Overcoming neural network retraining for each new target function
Reducing sensitivity to architectural and hyperparameter selections
Improving generalization beyond the original training domain
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrained basis networks initialize target function models
Domain mapping transforms inputs to reference domain
Reusable framework improves training efficiency and generalization
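One plausible reading of the modular, cross-task transfer described above is that frozen pretrained basis networks act as reusable feature functions, and a new target is fit as a combination of their outputs. The sketch below uses plain polynomials as hypothetical stand-ins for pretrained basis networks (the paper's actual composition mechanism is not detailed here) and solves for the combination weights by least squares.

```python
import numpy as np

# Hypothetical stand-ins for pretrained basis networks: each callable maps
# reference-domain inputs to one basis function. In the framework these
# would be frozen, pretrained networks rather than exact polynomials.
basis_nets = [lambda x: np.ones_like(x),
              lambda x: x,
              lambda x: x**2,
              lambda x: x**3]

x = np.linspace(-1.0, 1.0, 100)
target = 2.0 - x + 0.5 * x**3          # target on the reference domain

# Design matrix of basis-network outputs; solve for mixing weights only,
# leaving the pretrained modules untouched (the "modular" part).
Phi = np.stack([f(x) for f in basis_nets], axis=1)
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
```

Only the small weight vector is learned per task, so the expensive pretrained modules are shared across tasks.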