🤖 AI Summary
In human-robot collaboration, human resistance to robotic autonomy—driven by perceived risk and insufficient trust—hampers team effectiveness and user acceptance. To address this, we propose a dynamically adjustable shared autonomy framework centered on actively modeling, sustaining, and repairing trust—marking a shift from passive trust adaptation to active trust regulation. Our approach introduces a novel online Bayesian trust model grounded in temporal relational events, eliminating reliance on prior behavioral labels. It integrates relational event modeling, Bayesian learning, and a shared control architecture. In collaborative search-and-rescue experiments, our framework significantly improves task completion efficiency (+23.6%) and subjective trust ratings (p < 0.01) compared to a trust-agnostic baseline, while jointly optimizing objective performance and user acceptance.
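To make the idea of label-free online trust inference concrete, the sketch below maintains a Beta-Bernoulli belief over trust that is updated from time-stamped relational events, with exponential forgetting so stale evidence fades. This is only an illustrative stand-in: the paper's actual model is built on relational event modeling, and the class name, event encoding, and half-life parameter here are assumptions, not the authors' method.

```python
class OnlineTrustEstimator:
    """Illustrative online Bayesian trust estimate (Beta-Bernoulli with
    exponential forgetting). It updates a trust belief from time-stamped
    events alone, with no prior behavioral labels required."""

    def __init__(self, alpha=1.0, beta=1.0, half_life=60.0):
        self.alpha = alpha          # pseudo-count of trust-supporting events
        self.beta = beta            # pseudo-count of trust-eroding events
        self.half_life = half_life  # seconds for old evidence to lose half weight
        self.last_t = None

    def _decay(self, t):
        # Shrink both pseudo-counts toward the uniform prior so that
        # evidence observed long ago carries less weight than recent events.
        if self.last_t is not None:
            w = 0.5 ** ((t - self.last_t) / self.half_life)
            self.alpha = 1.0 + (self.alpha - 1.0) * w
            self.beta = 1.0 + (self.beta - 1.0) * w
        self.last_t = t

    def observe(self, t, positive):
        """Update the belief with a relational event at time t.
        `positive` marks events such as the human accepting a robot action."""
        self._decay(t)
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def trust(self):
        """Posterior mean of the trust level, in [0, 1]."""
        return self.alpha / (self.alpha + self.beta)
```

The Beta posterior gives a closed-form update per event, which is what makes the estimate cheap enough to run online inside a control loop.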
📝 Abstract
Shared autonomy functions as a flexible framework that empowers robots to operate across a spectrum of autonomy levels, allowing for efficient task execution with minimal human oversight. However, humans may be intimidated by robots' autonomous decision-making capabilities due to perceived risks and a lack of trust. This letter proposes a trust-preserved shared autonomy strategy that allows robots to seamlessly adjust their autonomy level, striving to optimize team performance and enhance their acceptance among human collaborators. By enhancing the relational event modeling framework with Bayesian learning techniques, the proposed approach enables dynamic inference of human trust based solely on time-stamped relational events communicated within human-robot teams. Adopting a longitudinal perspective on trust development and calibration in human-robot teams, the proposed trust-preserved shared autonomy strategy enables robots to actively establish, maintain, and repair human trust, rather than merely passively adapting to it. We validate the effectiveness of the proposed approach through a user study in a human-robot collaborative search-and-rescue scenario. Objective and subjective evaluations demonstrate its merits in both task execution and user acceptance over a baseline approach that does not consider trust preservation.
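The abstract's core loop, adjusting the robot's autonomy level as a function of the inferred trust, can be sketched as a simple arbitration rule. This is a minimal illustration, not the paper's controller: the three discrete levels and the threshold values are assumed for the example, and hysteresis (raise threshold above lower threshold) is used so the level does not oscillate around a single cutoff.

```python
def select_autonomy_level(trust, current_level,
                          levels=("manual", "shared", "autonomous"),
                          raise_th=0.7, lower_th=0.4):
    """Illustrative trust-aware autonomy arbitration (assumed thresholds).
    Raises autonomy only when estimated trust is high; lowers it when
    trust erodes, ceding control to the human to help repair trust."""
    i = levels.index(current_level)
    if trust >= raise_th and i < len(levels) - 1:
        return levels[i + 1]   # trust is high: take on more autonomy
    if trust <= lower_th and i > 0:
        return levels[i - 1]   # trust has eroded: yield control
    return current_level       # within the hysteresis band: hold steady
```

Lowering autonomy when trust drops is what makes the strategy active rather than passive: the robot deliberately cedes control to rebuild trust instead of only reacting to it.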