AI Summary
In multivariate time series classification, contrastive learning suffers from high intra-class similarity, while generative methods rely heavily on large-scale data. To address these issues, this paper proposes CoGenT, the first end-to-end framework that unifies contrastive and generative objectives. CoGenT co-optimizes the SimCLR contrastive loss and the masked autoencoder (MAE) reconstruction loss within a single model, jointly enhancing instance discrimination and data distribution modeling. Experiments across six benchmark datasets demonstrate that CoGenT significantly outperforms single-paradigm baselines: its F1 score improves by up to 59.2% over SimCLR and 14.27% over MAE. The framework combines strong discriminative capability with generative robustness, effectively mitigating the challenges posed by few-shot learning and intra-class confusion.
Abstract
Self-supervised learning (SSL) for multivariate time series spans two main paradigms: contrastive methods that excel at instance discrimination, and generative approaches that model data distributions. While effective individually, their complementary potential remains unexplored. We propose the Contrastive Generative Time series framework (CoGenT), the first framework to unify these paradigms through joint contrastive-generative optimization. CoGenT addresses fundamental limitations of both approaches: it overcomes contrastive learning's sensitivity to high intra-class similarity in temporal data while reducing generative methods' dependence on large datasets. We evaluate CoGenT on six diverse time series datasets. The results show consistent improvements, with up to 59.2% and 14.27% F1 gains over standalone SimCLR and MAE, respectively. Our analysis reveals that the hybrid objective preserves discriminative power while acquiring generative robustness. These findings establish a foundation for hybrid SSL in temporal domains. We will release the code shortly.
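Since the paper's code is not yet released, the joint objective described above can only be sketched. Below is a minimal NumPy illustration of what co-optimizing a SimCLR-style NT-Xent contrastive loss with an MAE-style masked reconstruction loss might look like; the function names (`nt_xent_loss`, `masked_mse`, `cogent_loss`) and the weighting term `lam` are hypothetical placeholders, not CoGenT's actual implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """SimCLR NT-Xent loss over a batch of paired view embeddings (n, d)."""
    z = np.concatenate([z1, z2], axis=0)                 # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit-normalize
    sim = z @ z.T / tau                                  # cosine similarities
    np.fill_diagonal(sim, -np.inf)                       # exclude self-pairs
    n = z1.shape[0]
    # Positive of sample i is its other view at index i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (-(sim[np.arange(2 * n), pos] - logsumexp)).mean()

def masked_mse(x, x_hat, mask):
    """MAE-style reconstruction error, averaged over masked positions only."""
    return (((x - x_hat) ** 2) * mask).sum() / mask.sum()

def cogent_loss(z1, z2, x, x_hat, mask, lam=1.0):
    """Hypothetical joint objective: contrastive + lam * reconstruction."""
    return nt_xent_loss(z1, z2) + lam * masked_mse(x, x_hat, mask)
```

In a real training loop both terms would share one encoder, so gradients from the contrastive term sharpen instance discrimination while the reconstruction term regularizes the representation toward the data distribution; `lam` trades off the two, and the paper does not specify its value.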