🤖 AI Summary
Existing robotic learning environments inadequately evaluate agents' ability to infer goals and adapt under dynamic objectives, shifting human preferences, and human-robot collaborative settings. To address this, we propose ColorGrid, a multi-agent reinforcement learning (MARL) benchmark supporting controllable non-stationarity, role asymmetry, and sparse heterogeneous rewards. Within this framework, we systematically demonstrate that Independent Proximal Policy Optimization (IPPO), a state-of-the-art MARL algorithm, exhibits severe performance degradation when goals are simultaneously non-stationary and asymmetric between agents, exposing fundamental limitations for real-world collaborative tasks. ColorGrid enables parameterized environment configuration, trajectory visualization, and full experimental reproducibility, and we open-source the complete codebase, pre-trained models, and analysis tools. This work establishes a rigorous, scalable, and goal-aware evaluation platform for cooperative intelligent agents, advancing benchmarking standards for adaptive, human-aligned MARL.
📝 Abstract
Autonomous agents' interactions with humans are increasingly focused on adapting to their changing preferences in order to improve assistance in real-world tasks. Effective agents must learn to accurately infer human goals, which are often hidden, to collaborate well. However, existing Multi-Agent Reinforcement Learning (MARL) environments lack the attributes required to rigorously evaluate these agents' learning capabilities. To this end, we introduce ColorGrid, a novel MARL environment with customizable non-stationarity, asymmetry, and reward structure. We investigate the performance of Independent Proximal Policy Optimization (IPPO), a state-of-the-art (SOTA) MARL algorithm, in ColorGrid and find through extensive ablations that IPPO fails to solve ColorGrid, particularly when goals are simultaneously non-stationary and asymmetric between a "leader" agent representing a human and a "follower" assistant agent. To support benchmarking of future MARL algorithms, we release our environment code, model checkpoints, and trajectory visualizations at https://github.com/andreyrisukhin/ColorGrid.
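The released repository is the authoritative implementation of the environment. Purely as an illustration of the three properties the abstract names (non-stationarity, asymmetry, and sparse rewards), the sketch below shows a hypothetical leader-follower gridworld where only the leader observes the hidden goal color, the goal switches mid-episode, and reward is given only on block pickup. All class names, observation fields, and dynamics here are assumptions for exposition, not ColorGrid's actual API:

```python
import random

class LeaderFollowerGridSketch:
    """Toy two-agent gridworld illustrating ColorGrid-style properties.
    Hypothetical sketch, not the ColorGrid API."""

    COLORS = ("red", "green", "blue")

    def __init__(self, size=5, switch_step=20, seed=0):
        self.size = size
        self.switch_step = switch_step  # mid-episode goal switch => non-stationarity
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.t = 0
        self.goal_color = self.rng.choice(self.COLORS)
        # Scatter one block of each color on distinct cells.
        cells = [(x, y) for x in range(self.size) for y in range(self.size)]
        self.blocks = dict(zip(self.rng.sample(cells, len(self.COLORS)), self.COLORS))
        self.pos = {"leader": (0, 0), "follower": (self.size - 1, self.size - 1)}
        return self._obs()

    def _obs(self):
        # Asymmetry: only the leader observes the hidden goal color;
        # the follower must infer it from the leader's behavior.
        return {
            "leader": {"pos": self.pos["leader"], "blocks": dict(self.blocks),
                       "goal": self.goal_color},
            "follower": {"pos": self.pos["follower"], "blocks": dict(self.blocks),
                         "goal": None},
        }

    def step(self, actions):
        """actions: {agent: (dx, dy)}; reward is sparse (pickups only)."""
        self.t += 1
        if self.t == self.switch_step:  # non-stationary objective
            self.goal_color = self.rng.choice(self.COLORS)
        rewards = {"leader": 0.0, "follower": 0.0}
        for agent, (dx, dy) in actions.items():
            x, y = self.pos[agent]
            x = min(max(x + dx, 0), self.size - 1)
            y = min(max(y + dy, 0), self.size - 1)
            self.pos[agent] = (x, y)
            color = self.blocks.pop((x, y), None)
            if color is not None:
                # Heterogeneous reward: + for the goal color, - otherwise.
                rewards[agent] = 1.0 if color == self.goal_color else -1.0
        done = not self.blocks
        return self._obs(), rewards, done
```

In a sketch like this, a follower policy earns reward only by picking up goal-colored blocks it cannot identify from its own observation, so it must condition on the leader's trajectory; this is one plausible way to realize the goal-inference challenge the abstract describes.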