A Novel Switch-Type Policy Network for Resource Allocation Problems: Technical Report

📅 2025-01-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low sample efficiency and poor cross-network generalization of deep reinforcement learning (DRL) in queueing system control, this paper proposes the switch-type neural network (STN). The STN is the first to embed a structured switching mechanism into the policy network, explicitly encoding structural priors from classical non-learning policies and thereby enabling domain-knowledge-driven parameter sharing and state-consistent decision-making. Built on the PPO framework, the STN combines rule-inspired architectural inductive biases with end-to-end gradient optimization, achieving both interpretability and a lightweight design. Experiments show that the STN improves training sample efficiency by over 40%; on unseen queueing networks it outperforms MLP-based policies by up to 68%, while matching state-of-the-art performance on known environments. The core contribution is the structural-prior-guided switch design, which significantly mitigates the overfitting and generalization failures of DRL in queueing control.

📝 Abstract
Deep Reinforcement Learning (DRL) has become a powerful tool for developing control policies in queueing networks, but the common use of Multi-layer Perceptron (MLP) neural networks in these applications has significant drawbacks. MLP architectures, while versatile, often suffer from poor sample efficiency and a tendency to overfit training environments, leading to suboptimal performance on new, unseen networks. In response to these issues, we introduce a switch-type neural network (STN) architecture designed to improve the efficiency and generalization of DRL policies in queueing networks. The STN leverages structural patterns from traditional non-learning policies, ensuring consistent action choices across similar states. This design not only streamlines the learning process but also fosters better generalization by reducing the tendency to overfit. Our work presents three key contributions: first, the development of the STN as a more effective alternative to MLPs; second, empirical evidence showing that STNs achieve superior sample efficiency in various training scenarios; and third, experimental results demonstrating that STNs match MLP performance in familiar environments and significantly outperform them in new settings. By embedding domain-specific knowledge, the STN enhances the Proximal Policy Optimization (PPO) algorithm's effectiveness without compromising performance, suggesting its suitability for a wide range of queueing network control problems.
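The switch-type idea can be illustrated with a minimal sketch: a small scoring function with shared weights is applied to each queue's local features, and the policy "switches" service toward high-scoring queues, mirroring index-style rules such as c-mu priorities. The code below is a hypothetical NumPy illustration under our own assumptions, not the paper's implementation; all function names and the (queue length, service rate) feature layout are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(feat_dim, hidden=8):
    # One shared parameter set, reused for every queue (parameter sharing).
    return {
        "W1": rng.normal(scale=0.1, size=(feat_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(scale=0.1, size=(hidden, 1)),
        "b2": np.zeros(1),
    }

def queue_score(params, feats):
    # Tiny shared MLP producing a scalar priority score for one queue.
    h = np.tanh(feats @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).item()

def switch_policy(params, per_queue_feats, temperature=1.0):
    # Score each queue with the SAME parameters, then form a softmax
    # distribution over "which queue to serve" (the switch decision).
    scores = np.array([queue_score(params, f) for f in per_queue_feats])
    z = scores / temperature
    z -= z.max()  # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return probs

# Example: 3 queues, each described by (queue length, service rate).
params = init_params(feat_dim=2)
state = [np.array([5.0, 1.0]), np.array([2.0, 3.0]), np.array([0.0, 2.0])]
probs = switch_policy(params, state)
print(probs.shape, round(probs.sum(), 6))  # -> (3,) 1.0
```

Because the scoring weights are shared across queues, the same parameters apply unchanged to a network with a different number of queues, which is one intuition for why such a structure could generalize better than a fixed-width MLP head.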
Problem

Research questions and friction points this paper is trying to address.

Deep Reinforcement Learning
Queueing Systems
Generalization Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Switching-Type Neural Network
Deep Reinforcement Learning
Resource Allocation
Jerrod Wigmore
Massachusetts Institute of Technology
B. Shrader
MIT Lincoln Laboratory
Eytan Modiano
MIT
Communication Networks · Performance Evaluation