Oryx: a Performant and Scalable Algorithm for Many-Agent Coordination in Offline MARL

📅 2025-05-28
🤖 AI Summary
To address the challenges of long-horizon coordination among many agents and temporal coherence in offline multi-agent reinforcement learning (MARL), this paper proposes Oryx, an offline autoregressive policy-update framework built on the retention-based architecture Sable and a sequential form of Implicit Constraint Q-learning (ICQ). By combining Sable's sequence modelling with ICQ's implicit constraint mechanism, Oryx performs coordinated, in-distribution autoregressive policy updates for many agents while maintaining temporal coherence over long trajectories. Evaluated across 65 benchmark datasets covering both discrete and continuous control (SMAC, RWARE, and Multi-Agent MuJoCo), the method achieves state-of-the-art (SOTA) performance on more than 80% of them, significantly outperforming prior offline MARL approaches. Newly constructed many-agent datasets further demonstrate strong generalisation and scalability.

📝 Abstract
A key challenge in offline multi-agent reinforcement learning (MARL) is achieving effective many-agent multi-step coordination in complex environments. In this work, we propose Oryx, a novel algorithm for offline cooperative MARL to directly address this challenge. Oryx adapts the recently proposed retention-based architecture Sable and combines it with a sequential form of implicit constraint Q-learning (ICQ), to develop a novel offline auto-regressive policy update scheme. This allows Oryx to solve complex coordination challenges while maintaining temporal coherence over lengthy trajectories. We evaluate Oryx across a diverse set of benchmarks from prior works (SMAC, RWARE, and Multi-Agent MuJoCo) covering tasks of both discrete and continuous control, varying in scale and difficulty. Oryx achieves state-of-the-art performance on more than 80% of the 65 tested datasets, outperforming prior offline MARL methods and demonstrating robust generalisation across domains with many agents and long horizons. Finally, we introduce new datasets to push the limits of many-agent coordination in offline MARL, and demonstrate Oryx's superior ability to scale effectively in such settings. We will make all of our datasets, experimental data, and code available upon publication.
Problem

Research questions and friction points this paper is trying to address.

Achieving effective many-agent coordination in offline MARL
Maintaining temporal coherence over lengthy trajectories
Scaling effectively in complex many-agent environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retention-based architecture Sable adaptation
Sequential implicit constraint Q-learning integration
Offline auto-regressive policy update scheme
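The two ingredients above can be sketched together: ICQ constrains the policy to the support of the offline data by weighting log-probabilities with a softmax over batch advantages, and the autoregressive scheme lets each agent's policy condition on the actions already chosen by preceding agents. The following is a minimal illustrative sketch of that combination, not the Oryx implementation; all function names, shapes, and the linear policy head are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def icq_weights(advantages, beta=1.0):
    # ICQ-style implicit constraint: exponentiated advantages,
    # normalised over the batch, keep the policy update inside
    # the support of the behaviour data (no out-of-distribution
    # action queries).
    return softmax(advantages / beta, axis=0)

def sequential_logits(obs, prev_actions_onehot, W):
    # Autoregressive factorisation: agent i's policy conditions on
    # the one-hot actions already selected by agents 1..i-1.
    # A linear policy head stands in for the Sable network here.
    x = np.concatenate([obs, prev_actions_onehot], axis=-1)
    return x @ W

def icq_policy_loss(logits, actions, advantages, beta=1.0):
    # Advantage-weighted log-likelihood: higher-advantage dataset
    # transitions contribute more to the policy update.
    w = icq_weights(advantages, beta)
    logp = np.log(softmax(logits, axis=-1))
    chosen = logp[np.arange(len(actions)), actions]
    return -(w * chosen).sum()
```

As a usage sketch, one would compute logits for agent *i* from its observations plus the previous agents' actions, then minimise `icq_policy_loss` over an offline batch; the temperature `beta` trades off greediness against staying close to the behaviour policy.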