A Low-Complexity Plug-and-Play Deep Learning Model for Generalizable Massive MIMO Precoding

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the practical challenges of downlink precoding in massive MIMO systems, including high computational complexity, sensitivity to channel estimation errors, and the limited generalization of existing deep learning approaches that often require scenario-specific training. To overcome these limitations, the authors propose PaPP, a plug-and-play precoder that integrates teacher–student knowledge distillation, self-supervised learning, meta-learning for domain generalization, and transmit power-aware input normalization. PaPP achieves robust, energy-efficient performance across diverse scenarios and power levels while being compatible with both fully digital and hybrid beamforming architectures. Evaluated on three unseen real-world scenarios, PaPP requires only a small amount of unlabeled data for fine-tuning and significantly outperforms conventional and deep learning baselines, reducing computational energy consumption by over 21× without compromising robustness or spectral efficiency.

📝 Abstract
Massive multiple-input multiple-output (mMIMO) downlink precoding offers high spectral efficiency but remains challenging to deploy in practice: near-optimal algorithms such as the weighted minimum mean squared error (WMMSE) are computationally expensive and sensitive to SNR and channel-estimation quality, while existing deep learning (DL)-based solutions often lack robustness and require retraining for each deployment site. This paper proposes a plug-and-play precoder (PaPP), a DL framework with a backbone that can be trained for either fully digital precoding (FDP) or hybrid beamforming (HBF) and reused across sites, transmit-power levels, and varying amounts of channel-estimation error, avoiding the need to train a new model from scratch at each deployment. PaPP combines a high-capacity teacher and a compact student with a self-supervised loss that balances teacher imitation against normalized sum-rate, trained using meta-learning-based domain generalization and transmit-power-aware input normalization. Numerical results on ray-tracing data from three unseen sites show that both the PaPP FDP and HBF models outperform conventional and deep learning baselines after fine-tuning with a small set of local unlabeled samples. Across both architectures, PaPP achieves more than a 21× reduction in modeled computation energy and maintains good performance under channel-estimation errors, making it a practical solution for energy-efficient mMIMO precoding.
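The self-supervised objective described above can be illustrated with a minimal sketch. Note this is an assumption about the general shape of such a loss, not the paper's actual implementation: `papp_style_loss` and the weighting `alpha` are hypothetical names, and the sum-rate here is the standard multi-user linear-precoding expression.

```python
import numpy as np

def sum_rate(H, W, noise_var=1.0):
    """Sum rate (bits/s/Hz) for channel H (K users x N antennas)
    and linear precoder W (N antennas x K users)."""
    G = np.abs(H @ W) ** 2            # G[k, j] = |h_k W[:, j]|^2
    sig = np.diag(G)                  # desired-signal power per user
    interf = G.sum(axis=1) - sig      # inter-user interference
    sinr = sig / (interf + noise_var)
    return np.log2(1.0 + sinr).sum()

def papp_style_loss(W_student, W_teacher, H, alpha=0.5, noise_var=1.0):
    """Hypothetical combined loss: alpha weights imitation of the
    teacher's precoder against the (negated, normalized) sum rate
    achieved by the student, so no labeled optimal precoder is needed."""
    imitation = np.mean(np.abs(W_student - W_teacher) ** 2)
    rate = sum_rate(H, W_student, noise_var)
    rate_ref = sum_rate(H, W_teacher, noise_var)  # normalization reference
    return alpha * imitation - (1.0 - alpha) * rate / max(rate_ref, 1e-9)
```

Normalizing the student's sum rate by the teacher's keeps the rate term on a comparable scale across channels and transmit-power levels, which is in the spirit of the power-aware normalization the paper describes.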
Problem

Research questions and friction points this paper is trying to address.

massive MIMO
precoding
deep learning
generalization
computational complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

plug-and-play precoding
domain generalization
meta-learning
teacher-student architecture
energy-efficient mMIMO
Ali Hasanzadeh Karkan — Department of Electrical Engineering, Polytechnique Montréal, Montréal, QC H3C 3A7, Canada
Ahmed Ibrahim — Ericsson Canada R&D, Kanata, ON K2K 2V6, Canada
Jean-François Frigon — Polytechnique Montréal (Wireless communications)
François Leduc-Primeau — Department of Electrical Engineering, Polytechnique Montréal, Montréal, QC H3C 3A7, Canada