Structure-preserving contrastive learning for spatial time series

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised representation learning methods struggle to preserve fine-grained inter-instance similarity relations for spatially structured time series such as traffic flow. The paper proposes a dual structure-preserving regularization framework: one regulariser preserves the topology of inter-instance similarities, the other preserves graph geometric similarity across spatial and temporal dimensions, and a dynamic weighting mechanism adaptively balances the contrastive objective against structural fidelity. Theoretical analysis and empirical evaluation reveal a positive correlation between similarity-structure preservation and representation informativeness. The method achieves state-of-the-art performance on multivariate time-series classification and on macroscopic and microscopic traffic forecasting, significantly improves similarity-structure fidelity, and is encoder-agnostic and plug-and-play. All code and datasets are publicly released.

📝 Abstract
Informative representations enhance model performance and generalisability in downstream tasks. However, learning self-supervised representations for spatially characterised time series, like traffic interactions, poses challenges as it requires maintaining fine-grained similarity relations in the latent space. In this study, we incorporate two structure-preserving regularisers for the contrastive learning of spatial time series: one regulariser preserves the topology of similarities between instances, and the other preserves the graph geometry of similarities across spatial and temporal dimensions. To balance contrastive learning and structure preservation, we propose a dynamic mechanism that adaptively weighs the trade-off and stabilises training. We conduct experiments on multivariate time series classification, as well as macroscopic and microscopic traffic prediction. For all three tasks, our approach preserves the structures of similarity relations more effectively and improves state-of-the-art task performances. The proposed approach can be applied to an arbitrary encoder and is particularly beneficial for time series with spatial or geographical features. Furthermore, this study suggests that higher similarity structure preservation indicates more informative and useful representations. This may help to understand the contribution of representation learning in pattern recognition with neural networks. Our code is made openly accessible with all resulting data at https://github.com/yiru-jiao/spclt.
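The abstract's core idea, keeping the similarity structure among representations consistent with that of the original instances alongside a contrastive objective, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the mean-squared penalty on cosine-similarity matrices are assumptions, whereas the paper's actual regularisers act on similarity topology and graph geometry.

```python
import numpy as np

def cosine_similarity_matrix(Z):
    """Pairwise cosine similarities between the rows of Z."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn.T

def structure_preserving_penalty(X, Z):
    """Penalise mismatch between the similarity structure of the raw
    instances X and that of their latent representations Z.
    Illustrative stand-in for the paper's topology- and
    graph-geometry-preserving regularisers (an assumption, not the
    published formulation)."""
    S_input = cosine_similarity_matrix(X.reshape(len(X), -1))
    S_latent = cosine_similarity_matrix(Z)
    return float(np.mean((S_input - S_latent) ** 2))
```

In a training loop, this penalty would be added to the contrastive loss so that the encoder is rewarded for keeping similar instances close and dissimilar ones apart in the latent space.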
Problem

Research questions and friction points this paper is trying to address.

Enhance self-supervised learning for spatial time series
Preserve similarity relations in latent space effectively
Improve performance in traffic prediction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-preserving contrastive learning
Dynamic mechanism balances training
Enhances spatial time series performance
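The "dynamic mechanism balances training" idea can be illustrated with a simple adaptive weighting scheme. This is a hypothetical sketch, not the mechanism from the paper: it tracks an exponential moving average of the ratio between the two loss magnitudes and uses it to rescale the structure-preservation term so that neither objective dominates as training progresses.

```python
def adaptive_total_loss(loss_contrastive, loss_structure, ema_ratio, beta=0.9):
    """Hypothetical dynamic weighting (assumed, not the paper's exact
    scheme): maintain an EMA of the contrastive-to-structure loss ratio
    and use it as the weight on the structure term, keeping the two
    objectives on a comparable scale."""
    ratio = loss_contrastive / (loss_structure + 1e-8)
    ema_ratio = beta * ema_ratio + (1.0 - beta) * ratio
    total = loss_contrastive + ema_ratio * loss_structure
    return ema_ratio, total
```

The returned `ema_ratio` would be carried across training steps; smoothing the weight rather than recomputing it per batch is one plausible way to stabilise training, as the abstract's dynamic mechanism is said to do.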
Yiru Jiao
My papers can be accessed at github.com/Yiru-Jiao/DocumentedKnowledgeSharing
Road user interaction, traffic safety, autonomous road safety
Sander van Cranenburgh
Associate professor, Delft University of Technology
Choice modelling, Machine Learning, Travel behaviour
Simeon Calvert
Department of Transport & Planning, Delft University of Technology, Delft, the Netherlands; CityAI lab, Delft University of Technology, Delft, the Netherlands
Hans van Lint
Department of Transport & Planning, Delft University of Technology, Delft, the Netherlands; CityAI lab, Delft University of Technology, Delft, the Netherlands