Energy-Efficient Dynamic Training and Inference for GNN-Based Network Modeling

πŸ“… 2025-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Traditional queuing models and packet-level simulations suffer from low efficiency, strong assumptions, and high computational overhead in resource optimization and planning for large-scale complex networks. To address these limitations, this paper proposes a dynamic, collaborative graph neural network (GNN) training and inference framework that enables context-aware energy-efficiency modeling and real-time prediction. The approach integrates a quantum approximate optimization (QAO)-based adaptive GNN orchestration mechanism with tripartite graph modeling and constrained graph partitioning to jointly optimize energy efficiency and application requirements. Experimental results demonstrate that, compared to optimal baselines, the proposed method reduces energy consumption by at least 50% and lowers the configuration churn rate by 60%, while maintaining high modeling accuracy and millisecond-scale inference latency.

πŸ“ Abstract
Efficient network modeling is essential for resource optimization and network planning in next-generation large-scale complex networks. Traditional approaches, such as queuing theory-based modeling and packet-based simulators, can be inefficient due to the assumptions made and the computational expense, respectively. To address these challenges, we propose an innovative energy-efficient dynamic orchestration of Graph Neural Network (GNN)-based model training and inference framework for context-aware network modeling and prediction. We have developed a low-complexity solution framework, QAG, that is a Quantum approximation optimization (QAO) algorithm for Adaptive orchestration of GNN-based network modeling. We leverage the tripartite graph model to represent a multi-application system with many compute nodes. Thereafter, we apply constrained graph-cutting using QAO to find feasible energy-efficient configurations of the GNN-based model and deploy them on the available compute nodes to meet the network modeling application requirements. The proposed QAG scheme closely matches the optimum and offers at least a 50% energy saving while meeting the application requirements with a 60% lower churn rate.
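The abstract's pipeline — a tripartite graph linking applications, GNN model configurations, and compute nodes, then a constrained selection of energy-efficient deployments — can be sketched as a toy search. All entity names, accuracy tiers, and energy numbers below are hypothetical illustrations (not from the paper), and exhaustive enumeration stands in for the paper's QAO-based constrained graph-cutting, so it scales only to toy instances:

```python
from itertools import product

# Hypothetical tripartite layers (illustrative values only):
# application -> minimum required accuracy tier
apps = {"traffic_pred": 2}
# GNN configuration -> (accuracy tier delivered, energy per inference)
configs = {"gnn_small": (1, 3.0), "gnn_large": (2, 7.5)}
# compute node -> available energy budget
nodes = {"edge0": 5.0, "edge1": 9.0}

def feasible(app, cfg, node):
    """An (app, config, node) edge triple is feasible if the config meets
    the app's accuracy requirement and fits the node's energy budget."""
    tier, energy = configs[cfg]
    return tier >= apps[app] and energy <= nodes[node]

# Brute-force stand-in for the QAO step: pick the feasible triple
# with the lowest energy cost.
best = None
for app, cfg, node in product(apps, configs, nodes):
    if feasible(app, cfg, node):
        energy = configs[cfg][1]
        if best is None or energy < best[3]:
            best = (app, cfg, node, energy)

print(best)  # the minimum-energy feasible deployment
```

In this toy instance only `gnn_large` meets the required tier, and only `edge1` has the budget for it, so the search returns that single feasible assignment; the paper's contribution is doing this jointly for many applications and nodes at low complexity.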
Problem

Research questions and friction points this paper addresses.

Energy-efficient dynamic training for GNN-based network modeling
Low-complexity quantum-optimized GNN orchestration framework
Reducing energy use while meeting network application demands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-efficient dynamic GNN training and inference
Quantum approximation optimization for adaptive orchestration
Constrained graph-cutting for energy-efficient configurations
πŸ”Ž Similar Papers
No similar papers found.