AI Summary
Traditional queuing models and packet-level simulations suffer from low efficiency, strong assumptions, and high computational overhead in resource optimization and planning for large-scale complex networks. To address these limitations, this paper proposes a dynamic, collaborative graph neural network (GNN) training and inference framework that enables context-aware energy-efficiency modeling and real-time prediction. The approach integrates a quantum approximate optimization (QAO)-based adaptive GNN orchestration mechanism with tripartite graph modeling and constrained graph partitioning to jointly optimize energy efficiency and application requirements. Experimental results show that, compared to the best-performing baselines, the proposed method reduces energy consumption by at least 50% and lowers the configuration churn rate by 60%, while maintaining high modeling accuracy and millisecond-scale inference latency.
Abstract
Efficient network modeling is essential for resource optimization and network planning in next-generation large-scale complex networks. Traditional approaches, such as queuing-theory-based modeling and packet-level simulators, can be inefficient due to the assumptions they make and their computational expense, respectively. To address these challenges, we propose an energy-efficient dynamic orchestration framework for Graph Neural Network (GNN)-based model training and inference that supports context-aware network modeling and prediction. We develop a low-complexity solution framework, QAG, a Quantum Approximate Optimization (QAO) algorithm for Adaptive orchestration of GNN-based network modeling. We leverage a tripartite graph model to represent a multi-application system with many compute nodes, and then apply constrained graph-cutting via QAO to find feasible, energy-efficient configurations of the GNN-based model and deploy them on the available compute nodes to meet the network modeling applications' requirements. The proposed QAG scheme closely matches the optimum, offering at least a 50% energy saving while meeting the application requirements with a 60% lower churn rate.
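To make the tripartite-graph formulation concrete, the following is a minimal, illustrative sketch of the orchestration problem the abstract describes: applications, candidate GNN model configurations, and compute nodes form three vertex sets, and the orchestrator selects a feasible (latency-satisfying, capacity-respecting) assignment that minimizes total energy. All names, edge weights, and constraint values here are invented for illustration, and the paper's QAO-based solver is replaced by a classical exhaustive search stand-in.

```python
from itertools import product

# Hypothetical tripartite graph: applications <-> GNN configurations <-> compute nodes.
# All values below are illustrative placeholders, not figures from the paper.
apps = {
    "latency_pred": {"max_latency_ms": 5},
    "energy_model": {"max_latency_ms": 10},
}
configs = {
    "gnn_small": {"energy_j": 2.0, "latency_ms": 4},
    "gnn_large": {"energy_j": 6.0, "latency_ms": 2},
}
nodes = {
    "edge_node": {"capacity": 1},
    "cloud_node": {"capacity": 2},
}

def feasible(app: str, cfg: str) -> bool:
    """An (app, config) edge exists only if the config meets the app's latency bound."""
    return configs[cfg]["latency_ms"] <= apps[app]["max_latency_ms"]

def best_assignment():
    """Classical brute-force stand-in for the QAO-based constrained graph-cutting:
    enumerate every (config, node) choice per application, keep feasible ones,
    and return the minimum-energy assignment."""
    best, best_energy = None, float("inf")
    for choice in product(product(configs, nodes), repeat=len(apps)):
        assignment = dict(zip(apps, choice))
        load = {n: 0 for n in nodes}
        ok = True
        for app, (cfg, node) in assignment.items():
            if not feasible(app, cfg):
                ok = False
                break
            load[node] += 1
            if load[node] > nodes[node]["capacity"]:
                ok = False
                break
        if not ok:
            continue
        energy = sum(configs[cfg]["energy_j"] for cfg, _ in assignment.values())
        if energy < best_energy:
            best, best_energy = assignment, energy
    return best, best_energy
```

In this toy instance both applications tolerate the slower, cheaper `gnn_small` configuration, so the minimum-energy assignment uses it for both; the QAO step in the paper plays the role of this search at a scale where exhaustive enumeration is intractable.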