Can Synthetic Data Improve Symbolic Regression Extrapolation Performance?

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Symbolic regression (SR) suffers from poor generalization in extrapolation tasks. To address this, we propose a synthetic data augmentation method that combines kernel density estimation (KDE) with knowledge distillation: KDE first identifies sparsely sampled regions in the input space; then a teacher model — a neural network (NN), random forest (RF), or GP model — generates targeted synthetic samples in those regions to train a genetic programming (GP)-based student model. Experiments across six benchmark datasets demonstrate that the approach significantly improves GP's extrapolation accuracy, with the GPe→GPp configuration yielding the best performance, while interpolation accuracy remains largely unaffected. This work is the first to empirically establish the critical role of teacher model selection in enhancing SR extrapolation capability. Moreover, we uncover spatial heterogeneity in model prediction errors — an insight that informs a novel paradigm for optimizing SR extrapolation.

📝 Abstract
Many machine learning models perform well when making predictions within the training data range, but often struggle when required to extrapolate beyond it. Symbolic regression (SR) using genetic programming (GP) can generate flexible models but is prone to unreliable behaviour in extrapolation. This paper investigates whether adding synthetic data can improve performance in such cases. We apply Kernel Density Estimation (KDE) to identify regions of the input space where the training data is sparse. Synthetic data is then generated in those regions using a knowledge distillation approach: a teacher model produces predictions on new input points, which are then used to train a student model. We evaluate this method across six benchmark datasets, using neural networks (NN), random forests (RF), and GP both as teacher models (to generate synthetic data) and as student models (trained on the augmented data). Results show that GP models often improve when trained on synthetic data, especially in extrapolation regions, though the improvement depends on the dataset and teacher model used. The largest gains are observed when synthetic data from GPe is used to train GPp in extrapolation regions, while interpolation performance changes only slightly. We also observe heterogeneous errors, where model performance varies across different regions of the input space. Overall, this approach offers a practical route to better extrapolation. Note: An earlier version of this work appeared in the GECCO 2025 Workshop on Symbolic Regression. This arXiv version corrects several parts of the original submission.
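The KDE step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data, domain bounds, and the 10%-of-median density threshold are all illustrative assumptions; the paper does not specify these details here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy 1-D setting: training inputs cover only [0, 2],
# leaving [2, 5] as a sparse/extrapolation region.
X_train = rng.uniform(0.0, 2.0, size=200)

# Fit a KDE on the training inputs (Scott's rule bandwidth by default).
kde = gaussian_kde(X_train)

# Candidate points spanning the full domain of interest,
# including the region beyond the training data.
candidates = np.linspace(0.0, 5.0, 500)
density = kde(candidates)

# Flag candidates whose estimated density falls below a threshold
# (here: 10% of the median density at the training points) as "sparse";
# synthetic samples would then be placed at these locations.
threshold = 0.1 * np.median(kde(X_train))
sparse_points = candidates[density < threshold]
```

With this setup, the flagged points lie beyond the training range, which is exactly where the paper's teacher models are asked to generate synthetic labels.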
Problem

Research questions and friction points this paper is trying to address.

Improves symbolic regression extrapolation using synthetic data
Addresses unreliable behavior of genetic programming in extrapolation
Evaluates synthetic data impact across multiple benchmark datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Kernel Density Estimation to identify sparse data regions
Generating synthetic data via knowledge distillation from teacher models
Training student models on augmented data for improved extrapolation
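The distillation pipeline behind the bullets above can be sketched end to end. Everything here is a stand-in under stated assumptions: a random forest plays the teacher role (one of the three teacher types the paper evaluates), while a plain polynomial fit stands in for the paper's GP-based symbolic-regression student purely to keep the sketch self-contained; the target function, sampling range, and synthetic-point placement are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Illustrative ground truth y = x^2, observed only on [0, 2].
X_train = rng.uniform(0.0, 2.0, size=(200, 1))
y_train = X_train[:, 0] ** 2

# Step 1: fit the teacher on the original (densely sampled) data.
teacher = RandomForestRegressor(n_estimators=200, random_state=0)
teacher.fit(X_train, y_train)

# Step 2: place synthetic inputs in the sparse region (in the paper,
# these locations come from the KDE step) and let the teacher label them.
X_synth = np.linspace(2.0, 4.0, 50).reshape(-1, 1)
y_synth = teacher.predict(X_synth)

# Step 3: train the student on the augmented dataset.
# (A degree-2 polynomial stands in for the GP-based SR student.)
X_aug = np.vstack([X_train, X_synth])
y_aug = np.concatenate([y_train, y_synth])
student = np.poly1d(np.polyfit(X_aug[:, 0], y_aug, deg=2))
```

One design point this sketch surfaces: a random-forest teacher predicts a constant outside its training range, so the synthetic labels it supplies in the extrapolation region may be flat and uninformative — consistent with the paper's finding that the choice of teacher model is critical, and that the GP-teacher (GPe) configuration performs best.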