🤖 AI Summary
This work addresses offline multi-task multi-agent reinforcement learning, where varying numbers of agents across tasks and partial observability hinder the modeling of long-term temporal dependencies and effective cross-agent coordination. To overcome these issues, the authors propose STAIRS-Former, a transformer-based architecture that combines a spatiotemporal hierarchical attention mechanism for attending to salient tokens, an interleaved recurrent structure for capturing long-range interaction histories, and a token dropout strategy for robustness to varying agent populations. Evaluated on the multi-task benchmarks SMAC, SMAC-v2, MPE, and MaMuJoCo, the method consistently outperforms existing approaches, achieving new state-of-the-art results and demonstrating strong generalization to unseen scenarios.
📝 Abstract
Offline multi-agent reinforcement learning (MARL) with multi-task datasets is challenging due to varying numbers of agents across tasks and the need to generalize to unseen scenarios. Prior works employ transformers with observation tokenization and hierarchical skill learning to address these issues. However, they underutilize the transformer attention mechanism for inter-agent coordination and rely on a single history token, which limits their ability to capture long-horizon temporal dependencies in partially observable MARL settings. In this paper, we propose STAIRS-Former, a transformer architecture augmented with spatial and temporal hierarchies that enables effective attention over critical tokens while capturing long interaction histories. We further introduce token dropout to enhance robustness and generalization across varying agent populations. Extensive experiments on diverse multi-agent benchmarks, including SMAC, SMAC-v2, MPE, and MaMuJoCo, with multi-task datasets demonstrate that STAIRS-Former consistently outperforms prior methods and achieves new state-of-the-art performance.
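As a rough illustration of the token-dropout idea mentioned above, the sketch below randomly removes per-agent tokens from an input sequence during training. This is a hedged, hypothetical reading of the technique: the abstract does not specify the paper's actual dropout scheme, so the function name, drop probability, and keep-at-least-one rule here are assumptions for illustration only.

```python
import random


def token_dropout(tokens, drop_prob=0.2, rng=None):
    """Randomly drop per-agent tokens so downstream layers see
    variable-length agent sets (illustrative sketch only; the
    paper's exact token-dropout scheme is not described in the
    abstract)."""
    rng = rng or random.Random()
    # Keep each token independently with probability 1 - drop_prob.
    kept = [t for t in tokens if rng.random() >= drop_prob]
    # Never return an empty sequence: keep the first token as a fallback.
    return kept if kept else [tokens[0]]


# Example usage: a team of 8 agent tokens, thinned at training time.
agent_tokens = [f"agent_{i}" for i in range(8)]
subset = token_dropout(agent_tokens, drop_prob=0.25, rng=random.Random(0))
```

Exposing the model to such thinned token sets during training is one plausible way to make attention layers insensitive to the number of agents, which is the robustness property the abstract attributes to token dropout.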