Geminet: Learning the Duality-based Iterative Process for Lightweight Traffic Engineering in Changing Topologies

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing machine learning–based traffic engineering (TE) approaches struggle with dynamic network topologies and suffer from high computational and memory overhead, limiting their scalability. This paper proposes Geminet, a lightweight and scalable TE framework built on two ideas: it learns the iterative gradient-descent adjustment process, whose update rule is topology-agnostic, thereby decoupling the neural network from any fixed topology and from precomputed paths; and it shifts optimization from path-level routing weights to edge-level dual variables, exploiting the fact that networks have far fewer edges than paths. As a result, Geminet adapts to arbitrary topology changes while its model size is only 0.04%–7% of state-of-the-art (SOTA) methods, training memory stays under 10 GiB (versus the 80-plus GiB required by HARP), convergence is 5.45× faster, and TE quality matches HARP without degradation. These advances narrow the gap between ML-based TE research and practical deployment.

📝 Abstract
Recently, researchers have explored ML-based Traffic Engineering (TE), leveraging neural networks to solve TE problems traditionally addressed by optimization. However, existing ML-based TE schemes remain impractical: they either fail to handle topology changes or suffer from poor scalability due to excessive computational and memory overhead. To overcome these limitations, we propose Geminet, a lightweight and scalable ML-based TE framework that can handle changing topologies. Geminet is built upon two key insights: (i) a methodology that decouples neural networks from topology by learning an iterative gradient-descent-based adjustment process, as the update rule of gradient descent is topology-agnostic, relying only on a few gradient-related quantities; (ii) shifting optimization from path-level routing weights to edge-level dual variables, reducing memory consumption by leveraging the fact that edges are far fewer than paths. Evaluations on WAN and data center datasets show that Geminet significantly improves scalability. Its neural network size is only 0.04% to 7% of existing schemes, while handling topology variations as effectively as HARP, a state-of-the-art ML-based TE approach, without performance degradation. When trained on large-scale topologies, Geminet consumes under 10 GiB of memory, more than eight times less than the 80-plus GiB required by HARP, while achieving 5.45 times faster convergence speed, demonstrating its potential for large-scale deployment.
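The abstract's second insight, optimizing edge-level dual variables with an update rule that reads only a few local gradient-related quantities, can be sketched as a projected subgradient step. This is a minimal illustration of why such a rule is topology-agnostic, not Geminet's actual formulation: the `utilization - 1` subgradient and step size are assumptions for the sketch.

```python
import numpy as np

def dual_step(lam, util, step=0.1):
    """One projected subgradient ascent step on per-edge dual variables.

    The update depends only on local per-edge quantities (current dual
    value and utilization), so the identical rule applies to any topology,
    with no path enumeration and no fixed input dimension.
    """
    return np.maximum(0.0, lam + step * (util - 1.0))

# Two topologies of different sizes share the exact same update rule.
lam_a = dual_step(np.zeros(4), np.array([0.5, 1.2, 0.9, 1.5]))
lam_b = dual_step(np.zeros(7), np.linspace(0.4, 1.6, 7))
```

Only duals on overloaded edges (utilization above 1) move away from zero; underloaded edges are clipped back by the projection, which is what keeps the per-edge state small.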
Problem

Research questions and friction points this paper is trying to address.

Handling topology changes in ML-based Traffic Engineering
Reducing computational and memory overhead in TE
Achieving scalability in large-scale network deployments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples neural networks from changing topologies
Uses edge-level dual variables for optimization
Reduces memory and computational overhead significantly
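The decoupling idea above can be made concrete with a toy learned update: a tiny network whose weights are shared across all edges, so its parameter count is fixed regardless of topology size. This is a hedged sketch only; the feature choice (`lam`, `grad`), layer sizes, and random weights are assumptions standing in for Geminet's trained policy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shared update network (random weights stand in for a
# trained policy; real weights would be learned).
W1 = rng.standard_normal((2, 8)) * 0.1
W2 = rng.standard_normal((8, 1)) * 0.1

def learned_update(lam, grad):
    """Apply one learned update per edge from local features only.

    Weights are shared across edges, so the model size (W1.size + W2.size
    = 24 parameters here) does not grow with the topology.
    """
    x = np.stack([lam, grad], axis=1)     # (num_edges, 2) local features
    h = np.tanh(x @ W1)                   # per-edge hidden activations
    delta = (h @ W2).squeeze(-1)          # per-edge update step
    return np.maximum(0.0, lam + delta)   # projection keeps duals feasible

# The same network handles a 5-edge and a 50-edge topology unchanged.
small = learned_update(np.zeros(5), rng.standard_normal(5))
large = learned_update(np.zeros(50), rng.standard_normal(50))
```

Because every edge runs the same small function, the memory footprint scales with the number of edges rather than the number of paths, mirroring the paper's scalability argument.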