🤖 AI Summary
This work addresses the limitations of existing linear models in time series forecasting, which struggle to capture complex intra-channel temporal dependencies and inter-channel nonlinear correlations, particularly when modeling high-frequency components. To overcome these challenges, the authors propose ACFormer, a novel architecture that introduces the concept of "individual receptive fields" and establishes an equivalence between convolution and channel attention. ACFormer combines the efficiency of linear projections with the nonlinear representational power of convolution, using a shared compression module, a gated attention mechanism, and independent block-wise expansion layers to jointly model multidimensional temporal patterns. Extensive experiments demonstrate that ACFormer significantly outperforms state-of-the-art methods across multiple benchmark datasets, with particularly strong performance on highly nonlinear, high-frequency time series forecasting tasks.
📝 Abstract
Time series forecasting (TSF) faces challenges in modeling complex intra-channel temporal dependencies and inter-channel correlations. Although recent research has highlighted the efficiency of linear architectures in capturing global trends, these models often struggle with non-linear signals. To address this gap, we conducted a systematic receptive field analysis of convolutional neural network (CNN) TSF models. We introduce the "individual receptive field" to uncover granular structural dependencies, revealing that convolutional layers act as feature extractors that mirror channel-wise attention while exhibiting superior robustness to non-linear fluctuations. Based on these insights, we propose ACFormer, an architecture designed to reconcile the efficiency of linear projections with the non-linear feature-extraction power of convolutions. ACFormer captures fine-grained information through a shared compression module, preserves temporal locality via gated attention, and reconstructs variable-specific temporal patterns using an independent patch expansion layer. Extensive experiments on multiple benchmark datasets demonstrate that ACFormer consistently achieves state-of-the-art performance, effectively mitigating the inherent drawbacks of linear models in capturing high-frequency components.
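The abstract names three stages: a compression module shared across channels, gated attention over channels, and a per-variable expansion back to the forecast horizon. The paper's actual layer definitions are not given here, so the following is only an illustrative NumPy sketch of that data flow; all shapes, weight names (`W_comp`, `W_g`, `W_exp`), and the sigmoid-gated residual form are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: B batches, C channels (variables),
# L input length, P compressed length, H forecast horizon.
B, C, L, P, H = 2, 7, 96, 24, 48

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# 1) Shared compression: one linear map L -> P applied to every channel.
W_comp = rng.normal(scale=0.1, size=(L, P))

# 2) Channel attention with a sigmoid gate (assumed form of "gated attention").
W_q = rng.normal(scale=0.1, size=(P, P))
W_k = rng.normal(scale=0.1, size=(P, P))
W_g = rng.normal(scale=0.1, size=(P, P))

# 3) Independent expansion: each variable gets its own P -> H projection.
W_exp = rng.normal(scale=0.1, size=(C, P, H))

def acformer_sketch(x):
    """x: (B, C, L) -> forecast (B, C, H). Illustrative data flow only."""
    z = x @ W_comp                                   # (B, C, P) shared compression
    q, k = z @ W_q, z @ W_k
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(P))  # (B, C, C) channel attention
    gate = 1.0 / (1.0 + np.exp(-(z @ W_g)))          # sigmoid gate
    z = gate * (attn @ z) + z                        # gated mixing with a residual path
    return np.einsum('bcp,cph->bch', z, W_exp)       # per-channel expansion to horizon

y = acformer_sketch(rng.normal(size=(B, C, L)))
print(y.shape)  # (2, 7, 48)
```

The point of the sketch is the shape discipline: mixing across channels happens only in the compressed `(B, C, P)` space via the `(B, C, C)` attention matrix, while the final `einsum` keeps each variable's expansion weights separate, matching the "independent patch expansion layer" described above.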