AdaFortiTran: An Adaptive Transformer Model for Robust OFDM Channel Estimation

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe degradation of channel estimation performance in OFDM systems under high-mobility scenarios, characterized by rapidly fading channels and low signal-to-noise ratio (SNR), this paper proposes the Adaptive Fortified Transformer (AdaFortiTran). AdaFortiTran is the first model to explicitly encode available channel statistics (SNR, delay spread, and Doppler shift) as learned nonlinear priors embedded within the Transformer architecture. It further introduces a residual local-global feature fusion mechanism that combines the strong local feature extraction of CNNs with the long-range spectro-temporal modeling capacity of Transformers across the time-frequency grid. Experimental results demonstrate that, under Doppler shifts of 200–1000 Hz, SNRs of 0–25 dB, and delay spreads of 50–300 ns, the proposed method achieves up to 6 dB lower mean squared error (MSE) than state-of-the-art channel estimators, significantly improving robustness and generalization in highly dynamic wireless environments.

📝 Abstract
Deep learning models for channel estimation in Orthogonal Frequency Division Multiplexing (OFDM) systems often suffer from performance degradation under fast-fading channels and low-SNR scenarios. To address these limitations, we introduce the Adaptive Fortified Transformer (AdaFortiTran), a novel model specifically designed to enhance channel estimation in challenging environments. Our approach employs convolutional layers that exploit locality bias to capture strong correlations between neighboring channel elements, combined with a transformer encoder that applies the global attention mechanism to channel patches. This approach effectively models both long-range dependencies and spectro-temporal interactions within single OFDM frames. We further augment the model's adaptability by integrating nonlinear representations of available channel statistics (SNR, delay spread, and Doppler shift) as priors. A residual connection is employed to merge global features from the transformer with local features from early convolutional processing, followed by final convolutional layers to refine the hierarchical channel representation. Despite its compact architecture, AdaFortiTran achieves up to 6 dB reduction in mean squared error (MSE) compared to state-of-the-art models. Tested across a wide range of Doppler shifts (200–1000 Hz), SNRs (0–25 dB), and delay spreads (50–300 ns), it demonstrates superior robustness in high-mobility environments.
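The pipeline described in the abstract (a convolutional front-end for local features, a transformer encoder conditioned on channel-statistic priors, a residual merge of global and local features, and final convolutional refinement) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: all layer sizes, the frame dimensions (72 subcarriers × 14 OFDM symbols), the real/imaginary two-channel input format, and the choice to inject the statistics embedding by adding it to every token are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class AdaFortiTranSketch(nn.Module):
    """Hypothetical sketch of the AdaFortiTran idea; dimensions are assumed."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Convolutional front-end: exploits locality bias among
        # neighboring channel elements (real/imag as 2 input channels).
        self.front = nn.Sequential(
            nn.Conv2d(2, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Nonlinear embedding of available channel statistics
        # (SNR, delay spread, Doppler shift) used as a learned prior.
        self.stats_mlp = nn.Sequential(
            nn.Linear(3, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Transformer encoder: global attention over channel patches.
        enc_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Final convolutional layer refining the fused representation.
        self.head = nn.Conv2d(d_model, 2, kernel_size=3, padding=1)

    def forward(self, h_coarse, stats):
        # h_coarse: (B, 2, S, T) coarse channel estimate; stats: (B, 3)
        local = self.front(h_coarse)                    # local CNN features
        B, C, S, T = local.shape
        tokens = local.flatten(2).transpose(1, 2)       # (B, S*T, C) patches
        prior = self.stats_mlp(stats).unsqueeze(1)      # (B, 1, C) prior
        global_feats = self.encoder(tokens + prior)     # global attention
        # Residual connection merging global and local features.
        fused = global_feats.transpose(1, 2).reshape(B, C, S, T) + local
        return self.head(fused)                         # refined estimate
```

A forward pass on a batch of coarse estimates of shape `(B, 2, 72, 14)` together with a `(B, 3)` statistics tensor (e.g. SNR in dB, delay spread in ns, Doppler shift in Hz) returns a refined estimate of the same shape as the input frame.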
Problem

Research questions and friction points this paper is trying to address.

Enhance OFDM channel estimation in fast-fading and low-SNR conditions
Model long-range dependencies and spectro-temporal interactions in OFDM
Improve robustness in high-mobility environments with varying channel statistics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Transformer combines CNN and Attention
Nonlinear channel statistics as model priors
Residual connection merges global and local features
Berkay Guler
Center for Pervasive Communications and Computing, University of California, Irvine
Hamid Jafarkhani
Chancellor's Professor, University of California, Irvine