🤖 AI Summary
This work addresses the practical challenges of downlink precoding in massive MIMO systems: high computational complexity, sensitivity to channel-estimation errors, and the limited generalization of existing deep learning approaches, which often require scenario-specific training. To overcome these limitations, the authors propose PaPP, a plug-and-play precoder that integrates teacher–student knowledge distillation, self-supervised learning, meta-learning for domain generalization, and transmit-power-aware input normalization. PaPP delivers robust, energy-efficient performance across diverse scenarios and power levels and is compatible with both fully digital and hybrid beamforming architectures. Evaluated on three unseen real-world scenarios, PaPP needs only a small amount of unlabeled data for fine-tuning, significantly outperforms conventional and deep learning baselines, and cuts computational energy consumption by a factor of more than 21× without compromising robustness or spectral efficiency.
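The paper's exact normalization scheme is not given here; as a rough illustration, the PyTorch sketch below shows one way a transmit-power-aware input normalization could work: fold the transmit power and noise level into the channel so the network sees an effective-SNR-scaled input, then rescale to a stable magnitude. The function name, tensor shapes, and the unit-power rescaling are assumptions for illustration, not the authors' implementation.

```python
import torch

def power_aware_normalize(H, p_tx, noise_var=1.0):
    """Hypothetical transmit-power-aware input normalization (illustrative only).

    H:    (B, K, N) complex channel matrices, K users, N transmit antennas
    p_tx: (B,) transmit power per sample

    Folds power and noise into the channel so a single model can serve many
    power levels, then rescales so input magnitudes stay in a stable range.
    """
    H_eff = H * torch.sqrt(p_tx / noise_var).view(-1, 1, 1)   # effective-SNR scaling
    rms = H_eff.abs().pow(2).mean(dim=(1, 2), keepdim=True).sqrt()
    return H_eff / rms.clamp(min=1e-12)                        # per-sample unit power
```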
📝 Abstract
Massive multiple-input multiple-output (mMIMO) downlink precoding offers high spectral efficiency but remains challenging to deploy in practice: near-optimal algorithms such as the weighted minimum mean squared error (WMMSE) precoder are computationally expensive and sensitive to signal-to-noise ratio (SNR) and channel-estimation quality, while existing deep learning (DL) solutions often lack robustness and require retraining for each deployment site. This paper proposes a plug-and-play precoder (PaPP), a DL framework whose backbone can be trained for either fully digital precoding (FDP) or hybrid beamforming (HBF) and reused across sites, transmit-power levels, and channel-estimation error levels, avoiding the need to train a new model from scratch at each deployment. PaPP combines a high-capacity teacher with a compact student trained on a self-supervised loss that balances teacher imitation against normalized sum-rate, using meta-learning for domain generalization and transmit-power-aware input normalization. Numerical results on ray-tracing data from three unseen sites show that, after fine-tuning with a small set of local unlabeled samples, both the PaPP FDP and HBF models outperform conventional and deep learning baselines. Across both architectures, PaPP achieves more than a 21$\times$ reduction in modeled computation energy and maintains good performance under channel-estimation errors, making it a practical solution for energy-efficient mMIMO precoding.
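No reference implementation accompanies the abstract; the sketch below is one plausible PyTorch rendering of the described student objective, trading off imitation of the teacher's precoder against the student's own downlink sum-rate, which needs only channel samples rather than labeled optimal precoders. All names, tensor shapes, and the choice to normalize by the teacher's sum-rate are assumptions, not the authors' method.

```python
import torch

def sum_rate(H, W, noise_var=1.0):
    """Downlink sum-rate in bits/s/Hz. H: (B, K, N) channels, W: (B, N, K) precoders."""
    G = torch.einsum("bkn,bnj->bkj", H.conj(), W)   # G[b, k, j] = h_k^H w_j
    P = G.abs() ** 2
    sig = torch.diagonal(P, dim1=1, dim2=2)          # desired-signal power per user
    interf = P.sum(dim=2) - sig                      # inter-user interference
    return torch.log2(1.0 + sig / (interf + noise_var)).sum(dim=1)

def student_loss(W_student, W_teacher, H, noise_var=1.0, alpha=0.5):
    """Hypothetical PaPP-style self-supervised objective (illustrative only):
    alpha weighs imitation of the teacher's precoder against the student's
    sum-rate, normalized by the teacher's rate so the scale is power-independent."""
    imitation = (W_student - W_teacher).abs().pow(2).mean()
    rate_ratio = sum_rate(H, W_student, noise_var) / \
        sum_rate(H, W_teacher, noise_var).detach().clamp(min=1e-6)
    return alpha * imitation - (1.0 - alpha) * rate_ratio.mean()
```

Minimizing this loss pushes the student toward the teacher's solution while directly rewarding sum-rate, which matches the abstract's description of balancing teacher imitation and normalized sum-rate without labeled data.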