🤖 AI Summary
To address the high computational complexity and slow training convergence of OFDM deep neural receivers (NeuralRx), this paper proposes a low-complexity residual network architecture. The method introduces a lightweight channel-split-and-shuffle module that uses small convolutional kernels, controllable dilation rates, and uniform channel dimensions, and eliminates element-wise addition in residual connections, thereby significantly reducing FLOPs and memory access overhead. Built upon the ResNet framework, the model incorporates GELU activations and a structured channel-splitting mechanism. Experimental results on standard OFDM tasks demonstrate that the proposed approach maintains or even improves decoding accuracy while reducing FLOPs by 37.2% and accelerating training convergence by 2.1× compared to baseline NeuralRx models, validating its efficiency and practicality for real-time OFDM signal processing.
📝 Abstract
Deep neural receivers (NeuralRxs) for Orthogonal Frequency Division Multiplexing (OFDM) signals have been proposed to enhance decoding performance over their signal-processing-based counterparts. However, existing architectures overlook the number of epochs required for training convergence and the number of floating-point operations (FLOPs), both of which grow significantly as performance improves. To tackle these challenges, we propose a new residual network (ResNet) block design for the OFDM NeuralRx. Specifically, we leverage small kernel sizes with dilation to lower the number of FLOPs (NFLOPs), and uniform channel sizes to reduce the memory access cost (MAC). The ResNet block combines novel channel split and shuffle blocks with Gaussian error linear unit (GELU) activations, and removes element-wise additions. Extensive simulations show that our proposed NeuralRx reduces NFLOPs and speeds up training convergence while improving decoding accuracy.
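The abstract's cost levers (small kernels, dilation, uniform channel sizes) can be illustrated with back-of-the-envelope FLOP and MAC counts for a single convolution layer. This is a minimal sketch using standard counting conventions, not the paper's model; all concrete sizes (64×64 feature maps, 64 channels) are assumed for illustration.

```python
# Illustrative cost accounting for one conv layer (not the paper's model).
# Standard conventions: FLOPs ~ multiply-accumulates; MAC counts feature-map
# and weight accesses. All concrete sizes below are assumed for illustration.

def conv_flops(h, w, c_in, c_out, k):
    """Multiply-accumulates of a k x k conv on an h x w feature map.
    Dilation enlarges the receptive field without adding FLOPs."""
    return h * w * c_in * c_out * k * k

def conv_mac(h, w, c_in, c_out, k):
    """Memory access cost: input/output feature maps plus kernel weights."""
    return h * w * (c_in + c_out) + k * k * c_in * c_out

def receptive_field(k, dilation):
    """Effective extent of a k x k kernel with the given dilation rate."""
    return dilation * (k - 1) + 1

H, W = 64, 64

# Small kernel + dilation: a 3x3 conv with dilation 2 matches the 5x5
# receptive field at the FLOP cost of a plain 3x3 conv (25/9x cheaper).
assert receptive_field(3, 2) == receptive_field(5, 1) == 5
assert conv_flops(H, W, 64, 64, 5) > conv_flops(H, W, 64, 64, 3)

# Uniform channels: at equal FLOPs (fixed c_in * c_out), MAC is lowest
# when c_in == c_out, so uniform channel sizes reduce memory traffic.
assert conv_flops(H, W, 64, 64, 3) == conv_flops(H, W, 32, 128, 3)
assert conv_mac(H, W, 64, 64, 3) < conv_mac(H, W, 32, 128, 3)
```

The same MAC argument motivates the channel split: each branch keeps its input and output widths equal, and a shuffle afterwards mixes information across branches without extra FLOPs.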