AI Summary
Current deep learning time series forecasting models exhibit unstable performance, primarily due to their black-box nature and the lack of interpretability, quantifiability, and controllability in existing evaluation frameworks. To address this, we propose SynTSBench, a novel synthetic-data-driven evaluation paradigm that enables systematic assessment of model capabilities in capturing fundamental temporal patterns (e.g., seasonality, trend, and abrupt changes) via programmable time-feature configuration. Our contributions include: (1) a three-dimensional analytical framework comprising temporal feature decomposition with capability mapping, quantitative robustness analysis against anomalies, and comparison against theoretically optimal baselines; (2) integration of noise injection, anomaly simulation, and analytical ground-truth modeling to enable multi-dimensional, fine-grained, and reproducible evaluation. Extensive experiments reveal substantial performance gaps between state-of-the-art models and theoretical optima across critical temporal patterns.
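The programmable time-feature configuration described above can be illustrated with a minimal sketch: a synthetic series composed of a linear trend, sinusoidal seasonality, Gaussian noise, and injected point anomalies, with the noiseless component retained as analytical ground truth. This is not the repository's actual generator; the function name and all parameters here are illustrative assumptions.

```python
import numpy as np

def make_series(n=500, trend=0.01, period=24, amp=1.0,
                noise_std=0.1, anomaly_idx=(250,), anomaly_mag=5.0, seed=0):
    """Compose a synthetic series from programmable components:
    linear trend + sinusoidal seasonality + Gaussian noise + point anomalies.
    Returns both the clean (analytical ground-truth) and observed series."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    # Deterministic components: known in closed form, so evaluation is exact.
    clean = trend * t + amp * np.sin(2 * np.pi * t / period)
    # Noise injection on top of the analytical ground truth.
    noisy = clean + rng.normal(0.0, noise_std, n)
    # Anomaly simulation: additive spikes at chosen indices.
    for i in anomaly_idx:
        noisy[i] += anomaly_mag
    return clean, noisy

clean, noisy = make_series()
```

Because every component is specified analytically, each pattern type (trend, seasonality, anomaly) can be toggled or scaled independently, which is what makes capability mapping and reproducible evaluation possible.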
Abstract
Recent advances in deep learning have driven rapid progress in time series forecasting, yet many state-of-the-art models struggle to perform robustly in real-world applications even when they achieve strong results on standard benchmarks. This persistent gap stems from the black-box nature of deep learning architectures and the limitations of current evaluation frameworks, which rarely provide clear, quantitative insight into the specific strengths and weaknesses of different models, complicating the selection of an appropriate model for a given forecasting scenario. To address these issues, we propose a synthetic-data-driven evaluation paradigm, SynTSBench, that systematically assesses the fundamental modeling capabilities of time series forecasting models through programmable feature configuration. Our framework isolates confounding factors and establishes an interpretable evaluation system with three core analytical dimensions: (1) temporal feature decomposition and capability mapping, which enables systematic evaluation of each model's capacity to learn specific pattern types; (2) robustness analysis under data irregularities, which quantifies noise tolerance thresholds and anomaly recovery capabilities; and (3) theoretical optimum benchmarking, which establishes performance boundaries for each pattern type, enabling direct comparison between model predictions and mathematical optima. Our experiments show that current deep learning models do not universally approach optimal baselines across all types of temporal features. The code is available at https://github.com/TanQitai/SynTSBench
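The theoretical optimum benchmarking idea can be sketched as follows: when a series is a known deterministic pattern plus independent Gaussian noise, the MSE-optimal point forecast is the deterministic component itself, so the achievable MSE floor equals the noise variance. A model's gap to this floor is then directly measurable. This is a minimal illustration under assumed parameters, not the paper's exact protocol; the naive lag-1 predictor is included only as a stand-in model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, noise_std = 10_000, 0.5
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 50)            # deterministic pattern (known ground truth)
observed = clean + rng.normal(0.0, noise_std, n)

# The MSE-optimal predictor of `observed` is the clean signal itself,
# so the theoretical MSE floor is the noise variance (noise_std**2 = 0.25).
optimal_mse = np.mean((observed - clean) ** 2)

# Illustrative model: naive lag-1 forecast. Its MSE is bounded below by the
# optimum; the difference quantifies how far the model is from the floor.
naive_mse = np.mean((observed[1:] - observed[:-1]) ** 2)
gap_to_optimum = naive_mse - optimal_mse
```

Repeating this comparison per pattern type (trend-only, seasonal-only, with anomalies, etc.) yields the per-pattern performance boundaries that the abstract refers to.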