🤖 AI Summary
To address the challenge of unlabeled, mixed-quality demonstration data, comprising both expert and suboptimal trajectories, in multi-agent offline imitation learning, this paper proposes an end-to-end annotation-and-learning framework. First, it constructs a progressive trajectory quality annotation pipeline that leverages large language models and preference-based reinforcement learning. Second, it trains a robust multi-agent policy from the generated annotations. Methodologically, the paper extends the single-agent DICE (DIstribution Correction Estimation) framework to multi-agent settings, introducing a value decomposition mechanism and a mixing network architecture that together ensure global consistency, local optimizability, and convexity of the policy optimization objective. Empirical evaluation on standard multi-agent RL benchmarks demonstrates significant improvements over state-of-the-art methods, especially when expert data is scarce, validating the framework's efficacy in exploiting heterogeneous, unlabeled demonstrations.
📝 Abstract
We study offline imitation learning (IL) in cooperative multi-agent settings, where demonstrations are of mixed, unlabeled quality, containing both expert and suboptimal trajectories. Our proposed solution is structured in two stages, trajectory labeling and multi-agent imitation learning, designed jointly to enable effective learning from heterogeneous, unlabeled data. In the first stage, we combine advances in large language models and preference-based reinforcement learning to construct a progressive labeling pipeline that distinguishes expert-quality trajectories. In the second stage, we introduce MisoDICE, a novel multi-agent IL algorithm that leverages these labels to learn robust policies while addressing the computational complexity of large joint state-action spaces. By extending the popular single-agent DICE framework to multi-agent settings with a new value decomposition and mixing architecture, our method yields a convex policy optimization objective and ensures consistency between global and local policies. We evaluate MisoDICE on multiple standard multi-agent RL benchmarks and demonstrate superior performance, especially when expert data is scarce.
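To make the "consistency between global and local policies" idea concrete, here is a minimal toy sketch of a monotonic value-mixing scheme, in the spirit of value-decomposition methods. All names, shapes, and the linear mixer are our own illustrative assumptions, not the paper's actual MisoDICE architecture: because the global value is monotone in each agent's local value (non-negative mixing weights), each agent acting greedily on its local values recovers the greedy joint action.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotonic_mix(local_values, w, b):
    """Combine per-agent values into a global value with non-negative
    weights, so the mix is monotone in each local value (toy sketch)."""
    return float(np.abs(w) @ local_values + b)

# Toy setup: two agents, three candidate local actions each.
q_agent = rng.normal(size=(2, 3))   # hypothetical per-agent local values
w, b = rng.normal(size=2), 0.1      # hypothetical mixer parameters

# Each agent picks its locally greedy action.
local_greedy = q_agent.argmax(axis=1)
joint_greedy_value = monotonic_mix(q_agent[np.arange(2), local_greedy], w, b)

# Brute-force maximum over all joint actions.
best_joint_value = max(
    monotonic_mix(np.array([q_agent[0, a0], q_agent[1, a1]]), w, b)
    for a0 in range(3) for a1 in range(3)
)

# Monotonicity makes decentralized greedy selection globally optimal.
assert np.isclose(joint_greedy_value, best_joint_value)
```

The same monotonicity constraint is what lets a decomposed architecture keep the decentralized (local) policies consistent with the centralized (global) objective; the paper's actual mixer and DICE-style convex objective are more involved than this linear toy.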