TimeFound: A Foundation Model for Time Series Forecasting

📅 2025-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper introduces TimeFound, a foundation model for zero-shot time series forecasting, designed for plug-and-play forecasting on unseen datasets across domains and temporal scales. To handle the heterogeneity of time series data, TimeFound employs an encoder-decoder Transformer architecture with a multi-resolution patching strategy that unifies the modeling of temporal patterns at multiple scales. It undergoes large-scale self-supervised pretraining on a hybrid corpus of real-world and synthetic time series, yielding two variants (200M and 710M parameters) that transfer zero-shot to new datasets and arbitrary forecast horizons. Evaluated on out-of-distribution datasets spanning distinct domains, TimeFound achieves superior or competitive zero-shot forecasting performance relative to existing time series foundation models, demonstrating strong generalization capability and practical utility.

📝 Abstract
We present TimeFound, an encoder-decoder transformer-based time series foundation model for out-of-the-box zero-shot forecasting. To handle time series data from various domains, TimeFound employs a multi-resolution patching strategy to capture complex temporal patterns at multiple scales. We pre-train our model with two sizes (200M and 710M parameters) on a large time-series corpus comprising both real-world and synthetic datasets. Over a collection of unseen datasets across diverse domains and forecasting horizons, our empirical evaluations suggest that TimeFound can achieve superior or competitive zero-shot forecasting performance, compared to state-of-the-art time series foundation models.
Problem

Research questions and friction points this paper is trying to address.

How to forecast unseen time series out-of-the-box, without per-dataset training or fine-tuning.
How to capture heterogeneous temporal patterns across domains and temporal scales in a single model.
How to pre-train a model that generalizes across diverse datasets and forecasting horizons.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based encoder-decoder architecture
Multi-resolution patching for temporal patterns
Pre-trained on diverse real-world and synthetic datasets
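The multi-resolution patching idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch lengths, the left-padding scheme, and the function names are assumptions chosen for the example; the paper does not specify these details here.

```python
import numpy as np

def patch_series(series, patch_len):
    """Split a 1-D series into non-overlapping patches of length patch_len,
    left-padding with the first value so the length divides evenly."""
    pad = (-len(series)) % patch_len
    padded = np.concatenate([np.full(pad, series[0]), series])
    return padded.reshape(-1, patch_len)

def multi_resolution_patches(series, patch_lens=(8, 32, 128)):
    """Tokenize one series at several temporal resolutions (hypothetical
    patch lengths). Returns a dict mapping patch length to an array of
    shape (num_patches, patch_len); each resolution's patches would then
    be embedded and fed to the Transformer encoder."""
    series = np.asarray(series, dtype=float)
    return {p: patch_series(series, p) for p in patch_lens}

# Toy input: a 300-step sinusoid tokenized at three resolutions.
tokens = multi_resolution_patches(np.sin(np.linspace(0, 20, 300)))
for p, arr in tokens.items():
    print(p, arr.shape)  # coarser patches -> fewer, longer tokens
```

Shorter patches preserve fine-grained local dynamics while longer patches summarize coarse trends, which is why combining resolutions helps a single model cover series with very different sampling rates and periodicities.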
Congxi Xiao — USTC
Jingbo Zhou — Business Intelligence Lab, Baidu Research
Yixiong Xiao — Business Intelligence Lab, Baidu Research
Xinjiang Lu — Business Intelligence Lab, Baidu Research
Le Zhang — Business Intelligence Lab, Baidu Research
Hui Xiong — Senior Scientist, Candela Corporation