🤖 AI Summary
In multivariate time series forecasting, channel mixing (CM) struggles to capture variable-specific temporal patterns, while channel independence (CI) neglects cross-variable dependencies; existing hybrid strategies suffer from limited generalizability and interpretability. To address this, we propose C3RL, a novel framework that, for the first time, formalizes CM and CI as mutually transposed dual views, implemented via a symmetric siamese-network architecture. C3RL jointly optimizes an adaptively weighted contrastive loss and a prediction loss to co-learn cross-variable dependencies and variable-specific dynamics. Crucially, it is model-agnostic: it integrates with diverse backbone forecasting models without architectural modification. Extensive experiments on seven backbone models demonstrate substantial improvements, raising the best-case performance rate to 76.3% for CM-based models and 81.4% for CI-based models, and highlighting the framework's generalizability, interpretability, and versatility.
📝 Abstract
Multivariate time series forecasting has drawn increasing attention due to its practical importance. Existing approaches typically adopt either a channel-mixing (CM) or a channel-independence (CI) strategy. The CM strategy can capture inter-variable dependencies but fails to discern variable-specific temporal patterns. The CI strategy improves on this aspect but, unlike CM, cannot fully exploit cross-variable dependencies. Hybrid strategies based on feature fusion offer limited generalization and interpretability. To address these issues, we propose C3RL, a novel representation learning framework that jointly models the CM and CI strategies. Motivated by contrastive learning in computer vision, C3RL treats the inputs of the two strategies as transposed views of each other and builds a siamese network architecture: one strategy serves as the backbone, while the other complements it. By jointly optimizing contrastive and prediction losses with adaptive weighting, C3RL balances representation quality and forecasting performance. Extensive experiments on seven models show that C3RL boosts the best-case performance rate to 81.4% for models based on the CI strategy and to 76.3% for models based on the CM strategy, demonstrating strong generalization and effectiveness. The code will be released once the paper is accepted.
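To make the core idea concrete, the sketch below shows, in NumPy, how an input series and its transpose can serve as the CM and CI views, with a contrastive alignment term combined with a prediction loss under adaptive (uncertainty-style) weighting. This is a minimal illustration under assumed shapes and stand-in linear encoders, not the paper's actual implementation; all names (`W_ci`, `W_cm`, `W_head`, the log-variance weights) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate series: C variables, T time steps, D-dim embeddings (illustrative sizes).
C, T, D, H = 4, 96, 16, 24
X = rng.standard_normal((C, T))   # CI view: each row is one channel's full sequence
Xt = X.T                          # CM view: the transposed dual view (time-major)

# Stand-in linear encoders projecting each view into a shared D-dim space.
W_ci = rng.standard_normal((T, D)) / np.sqrt(T)
W_cm = rng.standard_normal((C, D)) / np.sqrt(C)

Z_ci = X @ W_ci                          # (C, D): one embedding per channel
Z_cm = (Xt @ W_cm).mean(axis=0)          # (D,): pooled cross-channel embedding
Z_cm = np.broadcast_to(Z_cm, Z_ci.shape)

def cosine(a, b):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)

# Alignment-style contrastive term: pull the two views' representations together.
loss_con = (1.0 - cosine(Z_ci, Z_cm)).mean()

# Stand-in prediction loss: a linear head forecasting H future steps, scored by MSE.
W_head = rng.standard_normal((D, H)) / np.sqrt(D)
Y_hat = Z_ci @ W_head
Y_true = rng.standard_normal((C, H))
loss_pred = ((Y_hat - Y_true) ** 2).mean()

# Adaptive weighting via learnable log-variances (uncertainty weighting);
# fixed values stand in here for parameters that would be learned by gradient descent.
log_s_con, log_s_pred = 0.0, 0.0
total = (np.exp(-log_s_con) * loss_con + log_s_con
         + np.exp(-log_s_pred) * loss_pred + log_s_pred)
print(float(total))
```

In a real training loop the two encoders would be the backbone and its siamese complement, and the weighting terms would be optimized jointly with the network parameters rather than held fixed.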