AI Summary
In multivariate time series forecasting (MTSF), channel-independent (CI) strategies neglect inter-variable dependencies, while channel-dependent (CD) approaches often introduce noise and suffer from computational inefficiency. To address this trade-off, we propose Adapformer, a novel Transformer-based framework that introduces adaptive channel management to combine the strengths of CI and CD. Its architecture features a dual-stage encoder-decoder design. Key contributions include: (1) the Adaptive Channel Enhancer (ACE), which dynamically identifies and strengthens critical cross-channel dependencies during embedding; and (2) the Adaptive Channel Forecaster (ACF), a covariate-focused decoder that suppresses irrelevant covariate information to improve prediction robustness. Extensive experiments on multiple benchmark datasets demonstrate that Adapformer consistently outperforms state-of-the-art methods, achieving superior accuracy and significantly improved inference efficiency.
Abstract
In multivariate time series forecasting (MTSF), accurately modeling the intricate dependencies among multiple variables remains a significant challenge due to the inherent limitations of traditional approaches. Most existing models adopt either channel-independent (CI) or channel-dependent (CD) strategies, each with distinct drawbacks. CI methods fail to leverage inter-channel interactions, so they may not fully exploit the statistical dependencies present in the data. Conversely, CD approaches often incorporate extraneous information, risking overfitting and predictive inefficiency. To address these issues, we introduce the Adaptive Forecasting Transformer (Adapformer), a Transformer-based framework that merges the benefits of CI and CD methodologies through effective channel management. The core of Adapformer lies in its dual-stage encoder-decoder architecture, which comprises the Adaptive Channel Enhancer (ACE) for enriching the embedding process and the Adaptive Channel Forecaster (ACF) for refining predictions. ACE enhances token representations by selectively incorporating essential cross-channel dependencies, while ACF streamlines decoding by focusing on the most relevant covariates, substantially reducing noise and redundancy. Rigorous testing on diverse datasets shows that Adapformer outperforms existing models in both predictive accuracy and computational efficiency, establishing state-of-the-art results in MTSF.
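To make the "selective dependency" idea behind ACE concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): each channel's representation is enriched only by its top-k most correlated peer channels, rather than by all channels (as in fully CD mixing) or by none (as in CI). The function name, the use of correlation as the dependency score, and the residual mixing are all assumptions made for illustration.

```python
import numpy as np

def adaptive_channel_mix(x, k=2):
    """Sketch of ACE-style selective channel mixing.

    x : array of shape (channels, time_steps)
    k : number of peer channels each channel may draw from

    Each channel is updated with a softmax-weighted residual sum of its
    k most correlated peers; all other channels are ignored, which is
    the 'adaptive' middle ground between CI and CD strategies.
    """
    c, _ = x.shape
    # Cosine similarity between channels as a crude dependency score.
    xc = x - x.mean(axis=1, keepdims=True)
    xn = xc / (np.linalg.norm(xc, axis=1, keepdims=True) + 1e-8)
    sim = xn @ xn.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    out = x.copy()
    for i in range(c):
        peers = np.argsort(sim[i])[-k:]        # top-k most similar channels
        w = np.exp(sim[i, peers])
        w /= w.sum()                           # softmax over peer scores
        out[i] = x[i] + (w[:, None] * x[peers]).sum(axis=0)  # residual mix
    return out
```

In the full model this gating would be learned end-to-end inside the Transformer embedding stage; the fixed correlation heuristic here only illustrates the structural idea of pruning cross-channel information flow.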