DeCoP: Enhancing Self-Supervised Time Series Representation with Dependency Controlled Pre-training

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Time-series pre-training faces core challenges in modeling dynamic temporal dependencies, mitigating distribution shifts, and eliminating spurious correlations, all of which degrade generalization. To address these, we propose DeCoP (Dependency-Controlled Pre-training), a framework that introduces Instance-wise Patch Normalization and hierarchical Dependency Controlled Learning. DeCoP explicitly models dynamic cross-scale patch dependencies, decouples short- and long-term pattern interactions, and suppresses spurious correlations, while instance-level contrastive learning yields robust, instance-discriminative representations. Evaluated on ten benchmark datasets, DeCoP consistently outperforms state-of-the-art methods: on ETTh1 it reduces MSE by 3% over PatchTST while requiring only 37% of PatchTST's FLOPs, improving both accuracy and efficiency.

📝 Abstract
Modeling dynamic temporal dependencies, which evolve due to distribution shifts and multi-scale patterns, is a critical challenge in time series pre-training. This temporal variability severely impairs the generalization of pre-trained models to downstream tasks. Existing frameworks fail to capture the complex interactions of short- and long-term dependencies, making them susceptible to spurious correlations that degrade generalization. To address these limitations, we propose DeCoP, a Dependency Controlled Pre-training framework that explicitly models dynamic, multi-scale dependencies by simulating evolving inter-patch dependencies. At the input level, DeCoP introduces Instance-wise Patch Normalization (IPN) to mitigate distributional shifts while preserving the unique characteristics of each patch, creating a robust foundation for representation learning. At the latent level, a hierarchical Dependency Controlled Learning (DCL) strategy explicitly models inter-patch dependencies across multiple temporal scales, while an Instance-level Contrastive Module (ICM) enhances global generalization by learning instance-discriminative representations from time-invariant positive pairs. DeCoP achieves state-of-the-art results on ten datasets with lower computing resources, improving MSE by 3% on ETTh1 over PatchTST while using only 37% of the FLOPs.
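The abstract does not give the exact formulation of Instance-wise Patch Normalization, but the idea it describes, normalizing each patch of a series by that patch's own statistics rather than by global statistics, can be sketched as follows. The function name, non-overlapping patching, and per-patch mean/std normalization are assumptions for illustration; the paper's actual design may differ.

```python
import numpy as np

def instance_wise_patch_norm(x, patch_len, eps=1e-5):
    """Sketch of instance-wise patch normalization (hypothetical helper,
    not the paper's implementation): split a univariate series into
    non-overlapping patches and standardize each patch with its own
    mean and standard deviation, so local level/scale shifts are
    removed while within-patch shape is preserved."""
    n_patches = len(x) // patch_len
    # Drop any trailing remainder so the series reshapes cleanly.
    patches = x[: n_patches * patch_len].reshape(n_patches, patch_len)
    mu = patches.mean(axis=1, keepdims=True)
    sigma = patches.std(axis=1, keepdims=True)
    return (patches - mu) / (sigma + eps)

# Example: a linearly trending series; after normalization every
# patch has zero mean regardless of its absolute level.
series = np.arange(12, dtype=float)
normed = instance_wise_patch_norm(series, patch_len=4)
```

Normalizing per patch (rather than per whole series, as in instance normalization schemes like RevIN) is what lets each patch keep its local shape while discarding the level shifts that cause distribution drift across the series.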
Problem

Research questions and friction points this paper is trying to address.

Modeling dynamic temporal dependencies in time series
Addressing distribution shifts and multi-scale pattern challenges
Improving generalization of pre-trained models to downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dependency Controlled Pre-training framework
Instance-wise Patch Normalization mitigates distribution shifts
Hierarchical Dependency Controlled Learning strategy
Authors:
Yuemin Wu (USYD), Zhongze Wu (CSU), Xiu Su (CSU), Feng Yang (SEU), Hongyan Xu (Tianjin University), Xi Lin (SJTU), Wenti Huang (HNUST), Shan You (SenseTime Research), Chang Xu (USYD)