🤖 AI Summary
Existing pre-trained Transformer-based symbolic regression models are evaluated primarily on in-distribution (ID) data, overlooking the critical challenge of out-of-distribution (OOD) generalization under real-world data shifts. Method: To address this gap, we introduce the first multi-dimensional OOD benchmark for symbolic regression, systematically varying function complexity, input dimensionality, and noise patterns, coupled with a standardized empirical evaluation framework. Contribution/Results: Extensive experiments reveal that while models achieve strong ID performance, they suffer consistent and substantial degradation across all OOD settings, uncovering a pronounced "generalization gap." This finding exposes a fundamental limitation of current pre-training paradigms for practical deployment and establishes an empirically grounded benchmark and evaluation protocol to guide future work on improving the robustness and cross-distribution transferability of symbolic regression models.
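The multi-dimensional protocol described above can be sketched as an evaluation harness that shifts one factor at a time beyond a nominal pre-training range. Everything below is illustrative: the data generator, the setting names, and the shifted ranges are assumptions for the sketch, not the paper's actual benchmark, and a trivial mean predictor stands in for a pre-trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(dim, noise_std, complexity, n=200):
    """Toy data generator (hypothetical): a random polynomial whose
    degree plays the role of 'function complexity', with additive
    Gaussian noise on the targets."""
    X = rng.uniform(-1.0, 1.0, size=(n, dim))
    coeffs = rng.normal(size=(complexity, dim))
    # y = sum over degrees d of coeffs[d] . x**(d+1)
    y = sum((X ** (d + 1)) @ coeffs[d] for d in range(complexity))
    return X, y + rng.normal(0.0, noise_std, size=n)

def r2(y_true, y_pred):
    """Coefficient of determination, the usual fit metric."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# One axis of the OOD grid is shifted per setting (values are
# made up for illustration).
settings = {
    "in_distribution": dict(dim=2, noise_std=0.0, complexity=2),
    "ood_complexity":  dict(dim=2, noise_std=0.0, complexity=6),
    "ood_dimension":   dict(dim=8, noise_std=0.0, complexity=2),
    "ood_noise":       dict(dim=2, noise_std=0.5, complexity=2),
}
for name, cfg in settings.items():
    X, y = make_dataset(**cfg)
    # A pre-trained model's prediction would go here; the mean of
    # y is a placeholder baseline (R^2 = 0 by construction).
    y_pred = np.full_like(y, y.mean())
    print(f"{name}: R^2 = {r2(y, y_pred):.3f}")
```

Comparing a model's R² on the `in_distribution` row against the three shifted rows is what operationalizes the "generalization gap" measured here.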
📝 Abstract
Symbolic regression algorithms search a space of mathematical expressions for formulas that explain given data. Transformer-based models have emerged as a promising, scalable approach that shifts the expensive combinatorial search to a large-scale pre-training phase. However, the success of these models depends critically on their pre-training data, and their ability to generalize to problems outside the pre-training distribution remains largely unexplored. In this work, we conduct a systematic empirical study to evaluate the generalization capabilities of pre-trained, transformer-based symbolic regression models. We rigorously test performance both within the pre-training distribution and on a series of out-of-distribution challenges for several state-of-the-art approaches. Our findings reveal a significant dichotomy: while pre-trained models perform well in-distribution, their performance consistently degrades in out-of-distribution scenarios. We conclude that this generalization gap is a critical barrier for practitioners, as it severely limits the practical use of pre-trained approaches for real-world applications.
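The opening sentence of the abstract can be made concrete with a toy example: given data generated by a hidden formula, symbolic regression scores candidate expressions against the data and returns the best-fitting one. The hand-enumerated hypothesis space below is purely illustrative; real systems search a combinatorial space of expression trees (or, in the pre-trained transformer approach, decode the expression directly conditioned on the data).

```python
import numpy as np

rng = np.random.default_rng(1)

# Data from a hidden ground-truth formula (toy example).
x = rng.uniform(-2.0, 2.0, size=100)
y = np.sin(x) + x ** 2

# A tiny, hand-enumerated stand-in for the expression space.
candidates = {
    "x":           lambda t: t,
    "x**2":        lambda t: t ** 2,
    "sin(x)":      np.sin,
    "sin(x)+x**2": lambda t: np.sin(t) + t ** 2,
}

def mse(f):
    """Mean squared error of a candidate expression on the data."""
    return np.mean((y - f(x)) ** 2)

# The "search": pick the expression that best explains the data.
best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # -> sin(x)+x**2, the true formula, with zero error
```

The generalization question the abstract raises is whether a model pre-trained on one distribution of such formulas still recovers good expressions when the test formulas, dimensionality, or noise fall outside that distribution.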