🤖 AI Summary
This study addresses the X-haul bandwidth bottleneck in O-RAN caused by frequent CSI exchange in Cell-Free Massive MIMO systems, where existing approaches struggle to balance compression efficiency and prediction accuracy. To tackle this challenge, the authors propose LITE, a lightweight channel-gain prediction framework tailored to O-RAN constraints. It employs a one-dimensional convolutional autoencoder at the O-DU for CSI compression and an SE-enhanced asymmetric bidirectional LSTM at the Near-RT RIC for short-term prediction, trained with a compression-aware joint strategy. The proposed method achieves a 50% CSI compression ratio and reduces model complexity by 83.39%, while improving prediction accuracy by 5% over a baseline BiLSTM; even with compression, accuracy is only 6% below that of the uncompressed BiLSTM. Furthermore, a TensorRT-optimized implementation attains an inference throughput of 147k queries per second, a 4.6× speedup.
📝 Abstract
Cell-Free Massive Multiple-Input Multiple-Output (CF-MaMIMO) in the Open Radio Access Network (O-RAN) promises high spectral efficiency but is limited by frequent Channel State Information (CSI) exchanges, which strain fronthaul/midhaul/backhaul (X-haul) bandwidth and exceed the capabilities of existing approaches that rely on uncompressed CSI or heavy predictors. To overcome these constraints, we propose the Lightweight Intelligent Trajectory Estimator (LITE), a lightweight pipeline combining a 1-D convolutional Autoencoder (AE) at the O-RAN Distributed Unit (O-DU) with a Squeeze-and-Excitation (SE)-enhanced Bidirectional Long Short-Term Memory (BiLSTM) predictor at the Near-Real-Time RAN Intelligent Controller (Near-RT RIC), enabling short-horizon trajectory-unaware forecasting under strict transport and processing budgets. LITE applies 50% CSI compression and an asymmetric SE-BiLSTM, reducing model complexity by 83.39% while improving accuracy by 5% relative to a baseline BiLSTM. With compression-aware training, LITE incurs only a 6% accuracy loss versus the BiLSTM baseline, outperforming independent and end-to-end training strategies. A TensorRT-optimized implementation achieves 147k Queries per Second (QPS), a 4.6× throughput gain. These results demonstrate that LITE delivers X-haul-efficient, low-latency, and deployment-ready channel-gain prediction compatible with O-RAN functional splits.
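To make the two building blocks concrete, the following is a minimal numpy sketch of (a) a strided 1-D convolution halving the CSI feature dimension (the 50% compression applied at the O-DU) and (b) a Squeeze-and-Excitation gate recalibrating the channels of a hidden sequence (the SE enhancement applied to the BiLSTM features at the Near-RT RIC). All shapes, kernel sizes, and weights here are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not from the paper):
# T time steps of channel-gain features, F features per CSI snapshot.
T, F = 32, 16

def conv1d(x, w, stride=1):
    """Valid 1-D convolution of vector x with kernel w at the given stride."""
    k = len(w)
    out_len = (len(x) - k) // stride + 1
    return np.array([x[i * stride : i * stride + k] @ w for i in range(out_len)])

# (a) Encoder side: a stride-2 conv halves the feature dimension,
# i.e. a 50% compression ratio for the code sent over the X-haul.
w_enc = rng.standard_normal(2)          # toy kernel
x = rng.standard_normal(F)              # one CSI snapshot
z = conv1d(x, w_enc, stride=2)          # compressed code, length F // 2
assert z.shape == (F // 2,)

# (b) Squeeze-and-Excitation gate on a (T, C) hidden sequence:
# squeeze = global temporal average per channel,
# excite  = bottleneck FC-ReLU-FC-sigmoid producing per-channel scales.
def se_gate(h, w1, w2):
    s = h.mean(axis=0)                              # squeeze: (C,)
    e = 1.0 / (1.0 + np.exp(-(np.maximum(s @ w1, 0.0) @ w2)))  # excite: (C,)
    return h * e                                    # rescale channels

C = F // 2
h = rng.standard_normal((T, C))          # e.g. BiLSTM hidden states
w1 = rng.standard_normal((C, C // 4))    # bottleneck (reduction ratio 4, assumed)
w2 = rng.standard_normal((C // 4, C))
h_se = se_gate(h, w1, w2)
assert h_se.shape == h.shape
```

The stride-2 convolution is what makes the compression structural rather than a separate quantization step, and the SE gate adds only two small dense layers per block, which is consistent with the lightweight, complexity-reduced design the abstract describes.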