🤖 AI Summary
This study addresses the low transparency of dietary recommendations, and the consequent lack of user trust, in social robots deployed in healthcare settings. We propose an explainable interaction method grounded in the cognitive mechanism of inner speech, marking the first application of human introspective verbalization to dietary guidance robots. Our approach introduces a multi-layer introspective reasoning architecture that integrates large language models (for semantic understanding) with a dietary knowledge graph (for structured nutritional inference), enabling the robot to explicitly articulate its decision rationale. Experimental results demonstrate significant improvements in users' trust in, and comprehension of, dietary advice. A small-scale user study confirms an explanation reliability of 89%, with acceptable inference latency. The core contribution lies in formalizing inner speech, a well-established concept in cognitive science, as a computationally tractable, empirically verifiable, and anthropomorphic paradigm for explainable AI. This work establishes a novel foundation for trustworthy human–robot collaboration in healthcare.
📝 Abstract
We explore the use of inner speech as a mechanism to enhance transparency and trust in social robots for dietary advice. In humans, inner speech structures thought processes and decision-making; in robotics, it improves explainability by making reasoning explicit. This is crucial in healthcare scenarios, where trust in robotic assistants depends on both accurate recommendations and human-like dialogue, which makes interactions more natural and engaging. Building on this, we developed a social robot that provides dietary advice and equipped its architecture with inner speech capabilities to validate user input, refine its reasoning, and generate clear justifications. The system integrates large language models for natural language understanding with a knowledge graph for structured dietary information. By making decisions more transparent, our approach strengthens trust and improves human–robot interaction in healthcare. We validated this by measuring the computational efficiency of the architecture and by conducting a small user study that assessed the reliability of inner speech in explaining the robot's behavior.
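The validate–reason–justify loop described above can be illustrated with a minimal sketch. All names here (`DIETARY_KG`, `validate_input`, `reason`, `advise`, the user profile fields) are hypothetical: the paper's actual system uses a large language model and a full dietary knowledge graph, which are stubbed out below as plain Python so the inner-speech trace is easy to see.

```python
# Hypothetical sketch of an inner-speech dietary-advice loop.
# The real architecture pairs an LLM with a dietary knowledge graph;
# here both are replaced by a toy dictionary and rule-based checks.

# Toy "knowledge graph": food -> nutritional facts and restrictions.
DIETARY_KG = {
    "peanuts": {"allergens": ["peanut"], "suitable_for_diabetics": True},
    "cake": {"allergens": ["gluten", "egg"], "suitable_for_diabetics": False},
}

def validate_input(food, inner_log):
    """Validate the request against the knowledge graph, verbalizing each step."""
    inner_log.append(f"I was asked about '{food}'. Do I know this food?")
    known = food in DIETARY_KG
    inner_log.append("Yes, I have facts about it." if known
                     else "No, I should ask for clarification.")
    return known

def reason(food, profile, inner_log):
    """Refine reasoning: compare graph facts with the user's dietary profile."""
    facts = DIETARY_KG[food]
    conflicts = [a for a in facts["allergens"] if a in profile["allergies"]]
    inner_log.append(f"The user is allergic to {profile['allergies']}; "
                     f"'{food}' contains {facts['allergens']}.")
    if profile["diabetic"] and not facts["suitable_for_diabetics"]:
        conflicts.append("high sugar content")
    return conflicts

def advise(food, profile):
    """Return a recommendation plus the inner-speech trace that justifies it."""
    inner_log = []
    if not validate_input(food, inner_log):
        return "Could you tell me more about that food?", inner_log
    conflicts = reason(food, profile, inner_log)
    if conflicts:
        inner_log.append(f"I should advise against it because of: {conflicts}.")
        return f"I would avoid {food}: {', '.join(conflicts)}.", inner_log
    inner_log.append("No conflicts found, so I can recommend it.")
    return f"{food} seems fine for you.", inner_log

advice, trace = advise("cake", {"allergies": ["egg"], "diabetic": True})
```

Exposing `trace` alongside `advice` is the point of the design: the same verbalized steps that drive the decision can be spoken or displayed to the user as the explanation.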