STAIRS-Former: Spatio-Temporal Attention with Interleaved Recursive Structure Transformer for Offline Multi-task Multi-agent Reinforcement Learning

📅 2026-03-12
🤖 AI Summary
This work addresses the challenges of offline multi-task multi-agent reinforcement learning, where varying numbers of agents across tasks and partial observability hinder the modeling of long-term temporal dependencies and effective cross-agent coordination. To overcome these issues, the authors propose STAIRS-Former, a Transformer-based architecture that incorporates a spatio-temporal hierarchical attention mechanism to focus on salient tokens, an interleaved recursive structure to capture long-range interaction histories, and a token dropout strategy to enhance robustness to dynamic agent counts. Evaluated on multi-task benchmarks including SMAC, SMAC-v2, MPE, and MaMuJoCo, the method significantly outperforms existing approaches, achieving new state-of-the-art results and demonstrating strong generalization to unseen scenarios.
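The summary mentions a spatio-temporal hierarchical attention mechanism but gives no implementation details. A common way to realize such a factorization, and a plausible reading of the idea, is to alternate a spatial attention pass (agents attend to each other within a timestep) with a temporal pass (each agent attends over its own history). The sketch below is a generic illustration of this factorized pattern, not the paper's actual architecture; all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Scaled dot-product attention over the second-to-last axis.
    d = q.shape[-1]
    w = softmax(q @ k.swapaxes(-1, -2) / np.sqrt(d))
    return w @ v

def spatio_temporal_attention(x):
    """x: (T, N, d) tokens for T timesteps and N agents.

    Spatial pass: within each timestep, agents attend to each other.
    Temporal pass: each agent attends over its own T-step history.
    (Illustrative factorization only; projections, heads, and the
    paper's interleaved recursive structure are omitted.)
    """
    s = attend(x, x, x)           # (T, N, d): attention over agents
    t = s.swapaxes(0, 1)          # (N, T, d)
    t = attend(t, t, t)           # attention over time per agent
    return t.swapaxes(0, 1)       # back to (T, N, d)
```

Factorizing attention this way keeps the cost at O(T·N² + N·T²) rather than O((T·N)²) for full joint attention over all agent-timestep tokens.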

📝 Abstract
Offline multi-agent reinforcement learning (MARL) with multi-task datasets is challenging due to varying numbers of agents across tasks and the need to generalize to unseen scenarios. Prior works employ transformers with observation tokenization and hierarchical skill learning to address these issues. However, they underutilize the transformer attention mechanism for inter-agent coordination and rely on a single history token, which limits their ability to capture long-horizon temporal dependencies in partially observable MARL settings. In this paper, we propose STAIRS-Former, a transformer architecture augmented with spatial and temporal hierarchies that enables effective attention over critical tokens while capturing long interaction histories. We further introduce token dropout to enhance robustness and generalization across varying agent populations. Extensive experiments on diverse multi-agent benchmarks, including SMAC, SMAC-v2, MPE, and MaMuJoCo, with multi-task datasets demonstrate that STAIRS-Former consistently outperforms prior methods and achieves new state-of-the-art performance.
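The abstract credits token dropout with robustness and generalization across varying agent populations, but does not specify the mechanism. A minimal sketch of the general idea, assuming whole per-agent observation tokens are randomly dropped during training so the model cannot rely on a fixed population size (the function and its parameters are hypothetical, not the paper's API):

```python
import numpy as np

def token_dropout(tokens, drop_prob=0.2, rng=None):
    """Randomly drop whole agent tokens (rows), keeping at least one.

    tokens: (n_agents, d) array of per-agent observation embeddings.
    Dropping agent tokens at training time simulates varying agent
    counts, so downstream attention layers must coordinate over
    whichever agents remain.
    """
    rng = rng or np.random.default_rng()
    n = tokens.shape[0]
    keep = rng.random(n) >= drop_prob
    if not keep.any():                  # always keep at least one token
        keep[rng.integers(n)] = True
    return tokens[keep]

# Example: 5 agent tokens of dimension 4
rng = np.random.default_rng(0)
toks = rng.standard_normal((5, 4))
kept = token_dropout(toks, drop_prob=0.4, rng=rng)
```

Because attention is permutation-invariant over its token set, the same trained weights can then be applied unchanged to tasks with different agent counts.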
Problem

Research questions and friction points this paper is trying to address.

offline multi-agent reinforcement learning
multi-task datasets
inter-agent coordination
long-horizon temporal dependencies
partially observable environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-Temporal Attention
Interleaved Recursive Structure
Offline Multi-agent Reinforcement Learning
Token Dropout
Multi-task Generalization
Jiwon Jeon
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
Myungsik Cho
School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
Youngchul Sung
Professor, Electrical Engineering, KAIST
Signal Processing for Communications · Statistical Signal Processing · Reinforcement Learning