Multi-Agent Guided Policy Optimization

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing CTDE methods fail to fully exploit the advantages of centralized training and lack theoretical guarantees. This paper proposes MAGPO, a novel CTDE framework that enables efficient cooperative exploration and decentralized execution via centralized guidance and decentralized policy alignment. Its core contributions are: (1) an autoregressive joint policy modeling mechanism that supports scalable multi-agent cooperative exploration; and (2) a policy alignment constraint coupled with monotonic policy gradient optimization, establishing, for the first time, theoretical guarantees of monotonic policy improvement in CTDE. Evaluated across six environments and 43 tasks, MAGPO consistently outperforms mainstream CTDE baselines and achieves performance on par with or superior to fully centralized approaches, demonstrating its effectiveness, generalizability, and practicality.

📝 Abstract
Due to practical constraints such as partial observability and limited communication, Centralized Training with Decentralized Execution (CTDE) has become the dominant paradigm in cooperative Multi-Agent Reinforcement Learning (MARL). However, existing CTDE methods often underutilize centralized training or lack theoretical guarantees. We propose Multi-Agent Guided Policy Optimization (MAGPO), a novel framework that better leverages centralized training by integrating centralized guidance with decentralized execution. MAGPO uses an auto-regressive joint policy for scalable, coordinated exploration and explicitly aligns it with decentralized policies to ensure deployability under partial observability. We provide theoretical guarantees of monotonic policy improvement and empirically evaluate MAGPO on 43 tasks across 6 diverse environments. Results show that MAGPO consistently outperforms strong CTDE baselines and matches or surpasses fully centralized approaches, offering a principled and practical solution for decentralized multi-agent learning. Our code and experimental data can be found at https://github.com/liyheng/MAGPO.
Problem

Research questions and friction points this paper is trying to address.

Improves centralized training in multi-agent reinforcement learning
Ensures deployability under partial observability constraints
Provides theoretical guarantees for policy improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates centralized guidance with decentralized execution
Uses auto-regressive joint policy for exploration
Aligns joint policy with decentralized policies
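The innovation bullets above can be illustrated with a toy sketch: a centralized guide policy factorized autoregressively over agents, where each agent's action distribution conditions on the global state and the actions already chosen by earlier agents, plus a KL alignment term that pulls each decentralized policy (which sees only its local observation) toward the guide. Everything below (the linear parameterizations, random weights, and greedy action selection) is a hypothetical illustration, not the paper's actual architecture or objective.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def kl(p, q):
    # KL divergence between two categorical distributions (both strictly positive here)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
n_agents, n_actions, state_dim = 3, 4, 8

# Centralized guide: agent i's logits condition on the full state and on the
# previous agent's action (hypothetical linear parameterization).
state = rng.normal(size=state_dim)
W_state = rng.normal(size=(n_agents, n_actions, state_dim))
W_prev = rng.normal(size=(n_agents, n_actions))

# Decentralized policies: agent i sees only its local observation o_i.
obs = rng.normal(size=(n_agents, state_dim))
V = rng.normal(size=(n_agents, n_actions, state_dim))

joint_action, align_loss = [], 0.0
prev_action = 0
for i in range(n_agents):
    # Autoregressive guide distribution: depends on state and earlier actions
    guide = softmax(W_state[i] @ state + W_prev[i] * prev_action)
    # Deployable decentralized distribution: depends only on local observation
    local = softmax(V[i] @ obs[i])
    a = int(np.argmax(guide))          # greedy pick for illustration
    joint_action.append(a)
    align_loss += kl(guide, local)     # policy-alignment term
    prev_action = a

print("joint action:", joint_action, "alignment loss:", round(align_loss / n_agents, 3))
```

In training, the alignment term would be minimized alongside a policy-gradient objective so that the decentralized policies inherit the guide's coordinated behavior while remaining executable under partial observability.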