🤖 AI Summary
Existing population-based training methods for zero-shot coordination (ZSC) suffer from prohibitive computational costs and scale poorly with population size. To address this, we propose ScaPT, a scalable population training framework that realizes large cooperative populations through parameter-sharing meta-agents and adds a mutual information regularizer to preserve behavioral diversity while drastically reducing computational overhead. Experiments on the Hanabi benchmark show that ScaPT significantly outperforms prior approaches in zero-shot coordination. Notably, it provides the first empirical evidence that increasing population size yields substantial gains in ZSC generalization, and it remains efficient as the population grows. This work establishes an efficient, scalable paradigm for population-based ZSC in multi-agent cooperative learning.
📝 Abstract
Zero-shot coordination (ZSC) has recently become a prominent topic in reinforcement learning. It concerns the generalization ability of agents, requiring them to coordinate well with previously unseen collaborators without any fine-tuning. Population-based training has been shown to deliver strong zero-shot coordination performance; nevertheless, existing methods are limited by computational resources and focus mainly on optimizing diversity within small populations, neglecting the potential gains from scaling population size. To address this issue, this paper proposes Scalable Population Training (ScaPT), an efficient training framework comprising two key components: a meta-agent that efficiently realizes a population by selectively sharing parameters across agents, and a mutual information regularizer that ensures population diversity. To empirically validate the effectiveness of ScaPT, this paper evaluates it alongside representative frameworks on Hanabi and confirms its superiority.
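The two components named in the abstract — a parameter-sharing meta-agent and a mutual information regularizer — can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the variable names (`shared_W`, `agent_emb`), the linear policy, and the particular variational lower bound on I(agent id; action) are not taken from the paper, which does not expose its implementation here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM, EMB_DIM = 8, 16, 4, 8

# Meta-agent: one set of shared weights realizes the whole population;
# each member is distinguished only by a small per-agent embedding,
# so memory and compute grow far slower than N independent policies.
shared_W = rng.normal(scale=0.1, size=(OBS_DIM + EMB_DIM, ACT_DIM))
agent_emb = rng.normal(scale=0.1, size=(N_AGENTS, EMB_DIM))

def policy_logits(obs, agent_id):
    """Policy of population member `agent_id` under the shared parameters."""
    x = np.concatenate([obs, agent_emb[agent_id]])
    return x @ shared_W

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mi_regularizer(obs_batch, agent_ids):
    """Illustrative variational lower bound on I(agent_id; action):
    if the acting member can be identified from its action, behaviors
    are diverse. (A surrogate estimator assumed here, not the paper's.)"""
    ll = np.zeros(len(obs_batch))
    for i, (obs, aid) in enumerate(zip(obs_batch, agent_ids)):
        probs = np.array(
            [softmax(policy_logits(obs, k)) for k in range(N_AGENTS)]
        )
        a = np.argmax(probs[aid])                 # action taken by member `aid`
        posterior = probs[:, a] / probs[:, a].sum()  # q(id | obs, action)
        ll[i] = np.log(posterior[aid] + 1e-8) + np.log(N_AGENTS)
    return ll.mean()  # higher => members are more distinguishable

obs_batch = rng.normal(size=(32, OBS_DIM))
ids = rng.integers(0, N_AGENTS, size=32)
print(f"MI lower-bound estimate: {mi_regularizer(obs_batch, ids):.3f}")
```

In training, a term like `-mi_regularizer(...)` would be added to the population's loss so that diversity is preserved even though all members share one backbone; this is the general shape of the trade-off the abstract describes, not ScaPT's exact objective.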