🤖 AI Summary
To address the high energy consumption, heavy computational load, and stringent on-board power constraints of massive MIMO precoding in 6G low-Earth-orbit (LEO) satellite communications, this paper proposes an end-to-end, low-complexity precoding framework based on graph neural networks (GNNs). Methodologically, it unfolds the Dinkelbach algorithm and the weighted minimum mean square error (WMMSE) method into trainable deep neural networks and incorporates Taylor-series approximations of matrix inversion, jointly improving model interpretability and energy efficiency. The resulting approach reduces floating-point operations by approximately 70% relative to conventional iterative algorithms while significantly improving energy efficiency and robustness to channel variations. It consistently outperforms state-of-the-art (SOTA) methods in challenging scenarios, including multi-user deployments, dynamic network topologies, and limited feedback, demonstrating strong generalization and practical viability for resource-constrained LEO satellite systems.
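The paper's unfolded network itself is not reproduced here, but the classic Dinkelbach iteration it is built on is easy to illustrate. The sketch below is a toy single-link example of maximizing an energy-efficiency ratio rate(p)/power(p): each outer step solves the auxiliary problem max rate(p) − λ·power(p) (here by a simple grid search, a stand-in for the WMMSE inner solver) and then updates λ with the achieved ratio. All function names and the example rate/power models are illustrative, not from the paper.

```python
import math

def dinkelbach_ee(rate, power, p_lo, p_hi, iters=30):
    """Classic Dinkelbach iteration for maximizing rate(p) / power(p).

    Each step maximizes the auxiliary objective rate(p) - lam * power(p)
    (here via a coarse grid search for simplicity), then updates lam with
    the achieved energy-efficiency ratio. lam converges to the optimal EE.
    """
    lam = 0.0
    p = p_lo
    grid = [p_lo + (p_hi - p_lo) * i / 1000 for i in range(1001)]
    for _ in range(iters):
        # Inner maximization (toy stand-in for the WMMSE subproblem).
        p = max(grid, key=lambda t: rate(t) - lam * power(t))
        # Dinkelbach parameter update: current achieved EE.
        lam = rate(p) / power(p)
    return p, lam

# Toy example: Shannon-style rate log(1 + p), total power p + 1 (circuit power).
p_opt, ee = dinkelbach_ee(lambda p: math.log(1.0 + p),
                          lambda p: p + 1.0,
                          p_lo=0.0, p_hi=10.0)
```

For this toy model the iteration converges in a handful of steps to the analytic optimum p* = e − 1 with EE 1/e; the deep-unfolded version in the paper replaces the inner solver with learned GNN layers and a fixed, small number of such outer steps.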
📝 Abstract
Low-Earth-orbit (LEO) satellite communication is a critical component in the development of sixth-generation (6G) networks. The integration of massive multiple-input multiple-output (MIMO) technology is being actively explored to enhance the performance of LEO satellite communications. However, the limited power budget of LEO satellites makes it challenging to improve communication energy efficiency (EE). Artificial intelligence (AI) methods are increasingly recognized as promising tools for optimizing energy consumption while enhancing system performance, enabling more efficient and sustainable communications. This paper proposes approaches to address the challenges of precoding in massive MIMO LEO satellite communications. First, we introduce an end-to-end graph neural network (GNN) framework that substantially reduces the computational complexity of traditional precoding methods. Next, we develop a deep unfolding of the Dinkelbach algorithm and the weighted minimum mean square error (WMMSE) approach that transforms the iterative optimization into a structured neural network, improving EE, convergence speed, and computational efficiency. Furthermore, we incorporate a Taylor-expansion approximation of matrix inversion within the GNN, enhancing both the interpretability and the performance of the proposed method. Numerical experiments demonstrate the validity of the proposed method in terms of complexity and robustness, with significant improvements over state-of-the-art methods.
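The Taylor-expansion approximation of matrix inversion mentioned above can be illustrated with the truncated Neumann series A⁻¹ ≈ α·Σₙ (I − αA)ⁿ, which converges when the spectral radius of (I − αA) is below 1 and replaces an exact inverse with cheap matrix multiplications. The pure-Python sketch below is our own toy illustration of that idea; the scaling α, the truncation depth, and the example matrix are assumptions, not values from the paper.

```python
def neumann_inverse(A, alpha, terms=100):
    """Approximate A^{-1} by the truncated Neumann (Taylor) series
    A^{-1} ~= alpha * sum_{n=0}^{terms-1} (I - alpha*A)^n,
    valid when the spectral radius of (I - alpha*A) is below 1.
    """
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    # M = I - alpha * A, the matrix whose powers are summed.
    M = [[I[i][j] - alpha * A[i][j] for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    term = [row[:] for row in I]   # current power M^k, starting at M^0 = I
    acc = [row[:] for row in I]    # running sum of the powers
    for _ in range(terms - 1):
        term = matmul(term, M)
        acc = [[acc[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return [[alpha * acc[i][j] for j in range(n)] for i in range(n)]

# Toy 2x2 example; alpha = 0.1 keeps the spectral radius of I - alpha*A below 1.
A = [[2.0, 0.5], [0.5, 4.0]]
A_inv = neumann_inverse(A, alpha=0.1)
```

Only matrix products appear in the loop, which is why such truncated expansions map naturally onto neural-network layers: each layer can implement one (optionally learnable) term of the series instead of an explicit inverse.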