🤖 AI Summary
This paper introduces TimeFound, a foundation model for out-of-the-box zero-shot time series forecasting, designed for plug-and-play use on unseen datasets across domains and temporal scales. To handle the heterogeneity of time series data, TimeFound employs an encoder-decoder Transformer architecture with a multi-resolution patching strategy that captures temporal patterns at multiple scales within a unified model. It is pre-trained in a self-supervised manner on a large corpus of real-world and synthetic time series, in two sizes (200M and 710M parameters), enabling zero-shot transfer to new datasets and arbitrary forecast horizons. Evaluated on out-of-distribution datasets spanning diverse domains, TimeFound achieves superior or competitive zero-shot forecasting performance relative to existing time series foundation models, demonstrating strong generalization and practical utility.
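The multi-resolution patching idea can be sketched roughly as follows. This is a minimal illustration only: the function name, the non-overlapping patching scheme, and the patch sizes are assumptions for exposition, not the paper's actual implementation or configuration.

```python
import numpy as np

def multi_resolution_patches(series, patch_sizes=(8, 16, 32)):
    """Split a 1-D time series into non-overlapping patches at several
    resolutions (hypothetical sketch; patch sizes are illustrative).

    Returns a dict mapping each patch size p to an array of shape
    (num_patches, p), one view of the series per resolution."""
    series = np.asarray(series, dtype=float)
    out = {}
    for p in patch_sizes:
        # Trim so the length divides evenly, then reshape into patches.
        n = (len(series) // p) * p
        out[p] = series[:n].reshape(-1, p)
    return out

# A 64-step series yields 8 patches of size 8, 4 of size 16, 2 of size 32;
# each resolution gives the model a differently scaled view of the signal.
x = np.arange(64, dtype=float)
patches = multi_resolution_patches(x)
for p, arr in patches.items():
    print(p, arr.shape)
```

In a patch-based Transformer forecaster, each patch is typically projected to an embedding and treated as one token; combining patch sizes lets short patches track fine-grained variation while long patches summarize coarser trends.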
📝 Abstract
We present TimeFound, an encoder-decoder transformer-based time series foundation model for out-of-the-box zero-shot forecasting. To handle time series data from various domains, TimeFound employs a multi-resolution patching strategy to capture complex temporal patterns at multiple scales. We pre-train our model with two sizes (200M and 710M parameters) on a large time-series corpus comprising both real-world and synthetic datasets. Over a collection of unseen datasets across diverse domains and forecasting horizons, our empirical evaluations suggest that TimeFound can achieve superior or competitive zero-shot forecasting performance, compared to state-of-the-art time series foundation models.