🤖 AI Summary
To address the safety and ethical risks that arise when human trust is mismatched with robotic capabilities in human–robot interaction (HRI), this paper proposes an interaction-centered framework for trustworthy robotics. The framework rests on two foundational pillars—human awareness and transparency—and comprises four integrated components: human–robot intent recognition, explainable behavior modeling, transparent communication, and context-adaptive feedback. Unlike static trust assessment approaches, it introduces a real-time trust alignment mechanism that dynamically bridges the gap between perceived human trust and actual robotic competence. By integrating cognitive modeling with interaction design principles, the framework makes trust explainable, adjustable, and context-sensitive. The result is a theoretically grounded, actionable framework for designing trustworthy robots, offering a systematic path toward safe, ethical, and efficient human–robot collaboration.
📝 Abstract
As robots become more integrated into human environments, fostering trustworthiness in embodied robotic agents is paramount for effective and safe human–robot interaction (HRI). To achieve this, HRI applications must promote human trust that aligns with robot capabilities and avoid misplaced trust or overtrust, which can pose safety risks and raise ethical concerns. In this position paper, we outline an interaction-based framework for building trust through mutual understanding between humans and robots. We emphasize two main pillars: human awareness and transparency, referring respectively to the robot's ability to interpret human actions accurately and to clearly communicate its intentions and goals. By integrating these two pillars, robots can behave in a manner that aligns with human expectations and needs while providing their human partners with both comprehension of and control over their actions. We also introduce four components that we consider important for bridging the gap between a human's perceived sense of trust and a robot's true capabilities.
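To make the trust-alignment idea concrete, the following is a purely illustrative sketch, not taken from the paper: it assumes trust and competence can each be scored on a [0, 1] scale and compares them to flag overtrust or undertrust. The function name `align_trust` and the `tolerance` parameter are hypothetical choices for this example.

```python
def align_trust(perceived_trust: float, competence: float,
                tolerance: float = 0.1) -> str:
    """Toy model of trust alignment (illustrative only).

    Compares the human's perceived trust in the robot with the
    robot's actual competence, both expressed in [0, 1], and
    returns a coarse corrective action.
    """
    gap = perceived_trust - competence
    if gap > tolerance:
        # Overtrust: the human expects more than the robot can do,
        # a safety risk; the robot should communicate its limits.
        return "dampen"
    if gap < -tolerance:
        # Undertrust: capability goes unused; the robot should
        # explain its intentions and demonstrated reliability.
        return "reassure"
    # Trust is roughly calibrated to competence.
    return "maintain"

print(align_trust(0.9, 0.5))  # overtrust case
print(align_trust(0.3, 0.8))  # undertrust case
print(align_trust(0.6, 0.6))  # calibrated case
```

A real mechanism would of course estimate both quantities online from interaction signals rather than receive them as scalars; the sketch only shows the comparison step that the summary describes as "bridging the gap" between perceived trust and actual competence.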