🤖 AI Summary
Existing contrastive learning methods for time-series representation learning suffer from challenges in positive/negative sample selection and susceptibility to bias, limiting feature discriminability and generalization. To address this, we propose Frequency-masked Embedding Inference (FEI), a non-contrastive self-supervised framework for time-series modeling. FEI eliminates explicit positive/negative pair construction and instead captures continuous semantic relationships via a dual-branch prompting mechanism. It employs interpretable frequency-domain masking as prompts to enable bidirectional inference between the embedding space and the frequency domain. The framework supports self-supervised pretraining, linear evaluation, and end-to-end fine-tuning. Extensive experiments across eight benchmark time-series datasets demonstrate that FEI consistently outperforms state-of-the-art contrastive methods on both classification and regression tasks, achieving superior generalization and robustness.
📝 Abstract
Contrastive learning underpins most current self-supervised time series representation methods. The strategy for constructing positive and negative sample pairs significantly affects the final representation quality. However, due to the continuous nature of time series semantics, the modeling approach of contrastive learning struggles to accommodate the characteristics of time series data. This results in issues such as difficulty in constructing hard negative samples and the potential introduction of inappropriate biases during positive sample construction. Although some recent works have developed more principled strategies for constructing positive and negative sample pairs with improved effectiveness, they remain constrained by the contrastive learning framework. To fundamentally overcome the limitations of contrastive learning, this paper introduces Frequency-masked Embedding Inference (FEI), a novel non-contrastive method that completely eliminates the need for positive and negative samples. The proposed FEI constructs two inference branches based on a prompting strategy: 1) using frequency masking as a prompt to infer, in the embedding space, the representation of the target series with missing frequency bands, and 2) using the target series as a prompt to infer the embedding of its frequency mask. In this way, FEI enables continuous semantic relationship modeling for time series. Experiments on eight widely used time series datasets for classification and regression tasks, using linear evaluation and end-to-end fine-tuning, show that FEI significantly outperforms existing contrastive-based methods in terms of generalization. This study provides new insights into self-supervised representation learning for time series. The code is available at https://github.com/USTBInnovationPark/Frequency-masked-Embedding-Inference.
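The frequency-masking operation at the heart of FEI can be illustrated with a minimal NumPy sketch. This is a hypothetical helper based only on the abstract's description (mask out a frequency band, then recover a time-domain view), not the authors' actual implementation; the function name, band parameters, and return values are illustrative assumptions.

```python
import numpy as np

def frequency_mask(x, band_start, band_width):
    """Zero out a contiguous band of frequency bins in a 1-D series.

    Illustrative sketch of the frequency-masking prompt described in the
    abstract: the returned mask can serve as the prompt, and the masked
    series as the target view for embedding inference.
    """
    spec = np.fft.rfft(x)                       # real-input FFT
    mask = np.ones(spec.shape[0])               # 1 = keep bin, 0 = drop bin
    mask[band_start:band_start + band_width] = 0.0
    x_masked = np.fft.irfft(spec * mask, n=len(x))  # back to time domain
    return x_masked, mask

# Toy usage: mask bins 10..14 of a random length-128 series.
rng = np.random.default_rng(0)
x = rng.standard_normal(128)
x_masked, mask = frequency_mask(x, band_start=10, band_width=5)
```

In the full method, such masked views would feed the two inference branches (series-to-masked-embedding and series-to-mask-embedding) rather than a contrastive loss.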