TimeCapsule: Solving the Jigsaw Puzzle of Long-Term Time Series Forecasting with Compressed Predictive Representations

📅 2025-04-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenges of high parameter redundancy, fragmented multi-scale modeling, and the trade-off between efficiency and accuracy in long-term time series forecasting (LTSF), this paper proposes a unified framework based on high-dimensional information compression. We formulate time series as a three-dimensional tensor spanning time, variables, and hierarchical levels, enabling joint multi-mode dependency modeling and dimensionality compression via mode-wise products. We introduce a compression-prediction paradigm that jointly suppresses redundancy and captures multi-scale representations, and design a Joint-Embedding Predictive Architecture (JEPA) that performs forecasting directly within the compressed latent space, ensuring end-to-end alignment between representation learning and forecasting objectives. Evaluated on multiple authoritative LTSF benchmarks, our method achieves state-of-the-art performance, significantly outperforming complex deep models while surpassing mainstream linear baselines in computational efficiency.
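The "mode-wise products" in the summary refer to n-mode (Tucker-style) tensor-matrix products: each axis of the (time × variate × level) tensor is contracted with a factor matrix, shrinking that dimension. A minimal numpy sketch, with illustrative sizes and random stand-in matrices (the paper's learned parameters and exact dimensions are not given here):

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """n-mode product: contract matrix (r_k x d_k) with axis `mode` of `tensor`."""
    t = np.moveaxis(tensor, mode, 0)            # bring contracted axis to front
    out = np.tensordot(matrix, t, axes=(1, 0))  # (r_k, ...) after contraction
    return np.moveaxis(out, 0, mode)            # restore axis order

# Toy series: 96 time steps, 7 variates, 4 hierarchy levels (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.standard_normal((96, 7, 4))

# Compression matrices for each mode (random stand-ins for learned weights).
W_time = rng.standard_normal((24, 96))
W_var = rng.standard_normal((4, 7))
W_level = rng.standard_normal((2, 4))

# Compress every mode in turn: (96, 7, 4) -> (24, 4, 2).
Z = mode_product(mode_product(mode_product(X, W_time, 0), W_var, 1), W_level, 2)
print(Z.shape)
```

Because each mode is contracted independently, the latent tensor keeps the time/variate/level structure while every dimension is reduced, which is what lets the model reason over multi-mode dependencies at a fraction of the original size.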

๐Ÿ“ Abstract
Recent deep learning models for Long-term Time Series Forecasting (LTSF) often emphasize complex, handcrafted designs, while simpler architectures like linear models or MLPs have often outperformed these intricate solutions. In this paper, we revisit and organize the core ideas behind several key techniques, such as redundancy reduction and multi-scale modeling, which are frequently employed in advanced LTSF models. Our goal is to streamline these ideas for more efficient deep learning utilization. To this end, we introduce TimeCapsule, a model built around the principle of high-dimensional information compression that unifies these techniques in a generalized yet simplified framework. Specifically, we model time series as a 3D tensor, incorporating temporal, variate, and level dimensions, and leverage mode production to capture multi-mode dependencies while achieving dimensionality compression. We propose an internal forecast within the compressed representation domain, supported by the Joint-Embedding Predictive Architecture (JEPA), to monitor the learning of predictive representations. Extensive experiments on challenging benchmarks demonstrate the versatility of our method, showing that TimeCapsule can achieve state-of-the-art performance.
Problem

Research questions and friction points this paper is trying to address.

Simplifying complex LTSF models for better efficiency
Unifying techniques via high-dimensional information compression
Improving predictive accuracy in time series forecasting
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-dimensional information compression for time series
3D tensor modeling with multi-mode dependencies
Joint-Embedding Predictive Architecture (JEPA) to monitor predictive representation learning
Yihang Lu
University of Science and Technology of China, Hefei Institutes of Physical Science
Spatiotemporal Data · Time Series · Dynamical Systems · Tensor Methods
Yangyang Xu
Institute of Intelligent Machines, Hefei Institutes of Physical Science, Hefei, China; University of Science and Technology of China, Hefei, China
Qitao Qing
University of Science and Technology of China, Hefei, China
Xianwei Meng
Institute of Intelligent Machines, Hefei Institutes of Physical Science, Hefei, China