🤖 AI Summary
Current social robot navigation (SRN) research lacks standardized, quantitative metrics for evaluating human-robot collaborative behavior, particularly in prototypical interaction scenarios such as frontal approaches. This makes it difficult to distinguish proactive cooperation from reactive avoidance and hinders objective assessment of social compliance and safety. To address this, we propose two novel metrics: Conflict Intensity, which quantifies the geometric and dynamic characteristics of trajectory conflicts, and Responsibility Attribution, which computationally models responsibility assignment by predicting human and robot intentions to identify which agent initiates avoidance. This constitutes the first computationally grounded framework for quantifying human-robot responsibility distribution. Experiments demonstrate that our metric suite discriminates subtle differences in cooperative behavior across state-of-the-art navigation algorithms, significantly improving the discriminability and interpretability of social-compliance evaluation in standard benchmarks. The proposed metrics provide a reproducible, comparable, and principled quantitative benchmark for SRN algorithm design and validation.
📝 Abstract
Establishing standardized metrics that assess the quality and social compliance of robot behavior around humans is essential for Social Robot Navigation (SRN) research. Currently, commonly used evaluation metrics cannot quantify how cooperatively an agent behaves when interacting with humans. Concretely, in a simple frontal approach scenario, no existing metric captures whether both agents cooperate or whether one agent stays on a collision course and forces the other to evade. To address this limitation, we propose two new metrics: a conflict intensity metric and a responsibility metric. Together, these metrics evaluate the quality of human-robot interactions by showing how much a given algorithm contributed to reducing a conflict and which agent actually took responsibility for resolving it. This work aims to contribute to the development of a comprehensive and standardized evaluation methodology for SRN, ultimately enhancing the safety, efficiency, and social acceptance of robots in human-centric environments.
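The abstract does not give the paper's exact formulations, but the two ideas can be sketched in a toy form. The snippet below is a minimal illustration, not the authors' method: it assumes constant-velocity prediction over a short horizon, defines conflict intensity as how deeply the predicted closest approach penetrates an invented safety radius, and attributes responsibility by a counterfactual comparison of which agent's velocity change reduced the conflict more. All function names, thresholds, and the counterfactual scheme are illustrative assumptions.

```python
import numpy as np

def min_predicted_distance(p1, v1, p2, v2, horizon=5.0, dt=0.1):
    # Minimum distance between two agents over the horizon,
    # assuming both keep their current (constant) velocities.
    ts = np.arange(0.0, horizon, dt)
    return min(np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t)) for t in ts)

def conflict_intensity(p1, v1, p2, v2, safe_dist=1.0):
    # Toy conflict intensity: 0 when the predicted closest approach stays
    # outside the (invented) safety radius, approaching 1 for a head-on
    # collision course.
    d_min = min_predicted_distance(p1, v1, p2, v2)
    return max(0.0, 1.0 - d_min / safe_dist)

def responsibility(p1, v1_old, v1_new, p2, v2_old, v2_new):
    # Counterfactual attribution (an assumption, not the paper's model):
    # compare how much each agent's velocity change alone would have
    # reduced the conflict, and return agent 1's share of the reduction.
    base = conflict_intensity(p1, v1_old, p2, v2_old)
    red1 = base - conflict_intensity(p1, v1_new, p2, v2_old)
    red2 = base - conflict_intensity(p1, v1_old, p2, v2_new)
    total = red1 + red2
    if total <= 0.0:
        return 0.5  # neither agent reduced the conflict
    return red1 / total

# Frontal approach: agents walk straight at each other along the x-axis.
p_robot, v_robot = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_human, v_human = np.array([10.0, 0.0]), np.array([-1.0, 0.0])
print(conflict_intensity(p_robot, v_robot, p_human, v_human))

# The robot sidesteps while the human keeps going: the robot should be
# credited with (nearly) all of the conflict resolution.
v_robot_new = np.array([1.0, 0.3])
print(responsibility(p_robot, v_robot, v_robot_new,
                     p_human, v_human, v_human))
```

In this frontal-approach example the unresolved conflict scores high, and the counterfactual attribution assigns the full share of the resolution to the robot, which is exactly the distinction (cooperative avoidance vs. forcing the other agent to evade) that the abstract says existing metrics miss.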