ROTATE: Regret-driven Open-ended Training for Ad Hoc Teamwork

๐Ÿ“… 2025-05-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the generalization challenge in Ad Hoc Teamwork (AHT), where agents must collaborate effectively with previously unseen teammates. We propose the first regret-minimization-based bidirectional open-ended evolution framework for AHT. Unlike conventional two-stage paradigms that fix teammate policies during training, our method jointly optimizes both the AHT agent and an adversarial teammate generator: the latter dynamically produces โ€œweak teammatesโ€ that expose agent vulnerabilities, while the former updates its policy across diverse environments. To align training distribution with true generalization objectives, we introduce online behavioral coverage evaluation as a guiding signal. Evaluated on multiple AHT benchmarks, our approach significantly outperforms state-of-the-art methods, achieving an average 27.4% improvement in collaboration success rate with unseen test teammates. Moreover, it enhances both generalization robustness and behavioral coverage.

Technology Category

Application Category

๐Ÿ“ Abstract
Developing AI agents capable of collaborating with previously unseen partners is a fundamental generalization challenge in multi-agent learning, known as Ad Hoc Teamwork (AHT). Existing AHT approaches typically adopt a two-stage pipeline, where first, a fixed population of teammates is generated with the idea that they should be representative of the teammates that will be seen at deployment time, and second, an AHT agent is trained to collaborate well with agents in the population. To date, the research community has focused on designing separate algorithms for each stage. This separation has led to algorithms that generate teammate pools with limited coverage of possible behaviors, and that ignore whether the generated teammates are easy to learn from for the AHT agent. Furthermore, algorithms for training AHT agents typically treat the set of training teammates as static, thus attempting to generalize to previously unseen partner agents without assuming any control over the distribution of training teammates. In this paper, we present a unified framework for AHT by reformulating the problem as an open-ended learning process between an ad hoc agent and an adversarial teammate generator. We introduce ROTATE, a regret-driven, open-ended training algorithm that alternates between improving the AHT agent and generating teammates that probe its deficiencies. Extensive experiments across diverse AHT environments demonstrate that ROTATE significantly outperforms baselines at generalizing to an unseen set of evaluation teammates, thus establishing a new standard for robust and generalizable teamwork.
Problem

Research questions and friction points this paper is trying to address.

Developing AI agents for collaboration with unseen partners in Ad Hoc Teamwork (AHT).
Overcoming limited behavior coverage in teammate generation for AHT.
Improving generalization to unseen teammates through regret-driven open-ended training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for Ad Hoc Teamwork
Regret-driven open-ended training algorithm
Alternates improving agent and generating teammates
๐Ÿ”Ž Similar Papers
No similar papers found.