🤖 AI Summary
To address the severe degradation of channel estimation performance in OFDM systems under high-mobility scenarios, characterized by fast-fading channels and low signal-to-noise ratio (SNR), this paper proposes the Adaptive Fortified Transformer (AdaFortiTran). AdaFortiTran explicitly models available channel statistics (SNR, delay spread, and Doppler shift) as learnable nonlinear priors embedded within the Transformer architecture. It further introduces a residual local-global feature fusion mechanism that combines the strong local feature extraction of CNNs with the long-range spectro-temporal modeling capacity of Transformers across the time-frequency grid. Experimental results demonstrate that, under Doppler shifts of 200 to 1000 Hz, SNRs of 0 to 25 dB, and delay spreads of 50 to 300 ns, the proposed method achieves up to 6 dB lower mean squared error (MSE) than state-of-the-art channel estimators, significantly improving robustness and generalization in highly dynamic wireless environments.
📝 Abstract
Deep learning models for channel estimation in Orthogonal Frequency Division Multiplexing (OFDM) systems often suffer from performance degradation under fast-fading channels and low-SNR scenarios. To address these limitations, we introduce the Adaptive Fortified Transformer (AdaFortiTran), a novel model specifically designed to enhance channel estimation in challenging environments. Our approach employs convolutional layers that exploit locality bias to capture strong correlations between neighboring channel elements, combined with a transformer encoder that applies a global attention mechanism to channel patches. This approach effectively models both long-range dependencies and spectro-temporal interactions within single OFDM frames. We further augment the model's adaptability by integrating nonlinear representations of available channel statistics (SNR, delay spread, and Doppler shift) as priors. A residual connection merges global features from the transformer with local features from early convolutional processing, followed by final convolutional layers that refine the hierarchical channel representation. Despite its compact architecture, AdaFortiTran achieves up to 6 dB reduction in mean squared error (MSE) compared to state-of-the-art models. Tested across a wide range of Doppler shifts (200 to 1000 Hz), SNRs (0 to 25 dB), and delay spreads (50 to 300 ns), it demonstrates superior robustness in high-mobility environments.
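The pipeline described above (convolutional front-end, transformer encoder over channel tokens with nonlinear channel-statistic priors, residual local-global fusion, and convolutional refinement) can be sketched roughly as follows. This is a minimal PyTorch illustration of the data flow only; all layer sizes, the class name `AdaFortiTranSketch`, and the choice to inject the statistics as a single prepended prior token are our own assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaFortiTranSketch(nn.Module):
    """Illustrative sketch (not the paper's exact design) of the flow:
    conv front-end -> transformer over channel tokens with a nonlinear
    channel-statistics prior -> residual fusion -> conv refinement."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Local feature extraction: exploits the locality bias of convs
        # on the 2-channel (real, imag) least-squares channel estimate.
        self.conv_in = nn.Sequential(
            nn.Conv2d(2, d_model, 3, padding=1), nn.GELU(),
            nn.Conv2d(d_model, d_model, 3, padding=1),
        )
        # Nonlinear prior: embed (SNR, delay spread, Doppler) as a token.
        self.stat_mlp = nn.Sequential(
            nn.Linear(3, d_model), nn.GELU(), nn.Linear(d_model, d_model),
        )
        # Global spectro-temporal modeling over time-frequency tokens.
        enc_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Final refinement back to a (real, imag) channel estimate.
        self.conv_out = nn.Conv2d(d_model, 2, 3, padding=1)

    def forward(self, ls_est, stats):
        # ls_est: (B, 2, n_subcarriers, n_symbols); stats: (B, 3)
        local = self.conv_in(ls_est)                 # (B, d, F, T)
        b, d, f, t = local.shape
        tokens = local.flatten(2).transpose(1, 2)    # (B, F*T, d)
        prior = self.stat_mlp(stats).unsqueeze(1)    # (B, 1, d)
        out = self.encoder(torch.cat([prior, tokens], dim=1))
        glob = out[:, 1:].transpose(1, 2).reshape(b, d, f, t)
        fused = glob + local                         # residual fusion
        return self.conv_out(fused)                  # refined estimate
```

A forward pass on a toy 12-subcarrier, 14-symbol frame, `model(torch.randn(2, 2, 12, 14), torch.randn(2, 3))`, returns a tensor of the same `(2, 2, 12, 14)` shape, i.e. a refined channel estimate per resource element.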