SemanticScanpath: Combining Gaze and Speech for Situated Human-Robot Interaction Using LLMs

📅 2025-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses semantic ambiguity in embodied human–robot interaction by introducing a novel paradigm for joint speech and gaze understanding. We propose the first gaze-to-text scanpath semantic encoding method, which transforms raw eye-tracking trajectories into structured textual sequences; these are jointly fed with speech queries into a large language model (LLM) to enable referential gaze–scene co-understanding. Our approach integrates multimodal semantic fusion, speech–gaze joint representation learning, and a robot-end real-time closed-loop execution framework. Evaluated across multiple tasks in two realistic scenarios, it achieves 92.4% accuracy—significantly outperforming baselines—and has been successfully deployed on a service robot platform for end-to-end multimodal perception-to-action execution. The core contribution lies in overcoming limitations of conventional gaze modeling: our method equips LLMs with robust capabilities to interpret naturalistic eye movements, demonstrating strong generalizability across tasks and environments.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) have substantially improved the conversational capabilities of social robots. Nevertheless, for an intuitive and fluent human-robot interaction, robots should be able to ground the conversation by relating ambiguous or underspecified spoken utterances to the current physical situation and to the intents expressed non verbally by the user, for example by using referential gaze. Here we propose a representation integrating speech and gaze to enable LLMs to obtain higher situated awareness and correctly resolve ambiguous requests. Our approach relies on a text-based semantic translation of the scanpath produced by the user along with the verbal requests and demonstrates LLM's capabilities to reason about gaze behavior, robustly ignoring spurious glances or irrelevant objects. We validate the system across multiple tasks and two scenarios, showing its generality and accuracy, and demonstrate its implementation on a robotic platform, closing the loop from request interpretation to execution.
Problem

Research questions and friction points this paper is trying to address.

Resolving ambiguous spoken requests in human-robot interaction
Integrating gaze and speech for situated awareness in robots
Enhancing LLMs to interpret nonverbal cues like referential gaze
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates gaze and speech for robot interaction
Uses LLMs to resolve ambiguous verbal requests
Validated across tasks and robotic implementation
🔎 Similar Papers
No similar papers found.
E
Elisabeth Menendez
Robotics Lab, Department of Systems Engineering and Automation, Universidad Carlos III de Madrid (UC3M)
Michael Gienger
Michael Gienger
Honda Research Institute Europe
RoboticsHuman-Robot InteractionMachine Learning
S
Santiago Martínez
Robotics Lab, Department of Systems Engineering and Automation, Universidad Carlos III de Madrid (UC3M)
Carlos Balaguer
Carlos Balaguer
Full Professor, University Carlos III of Madrid
roboticshumanoidsrobohealthautomation
Anna Belardinelli
Anna Belardinelli
Principal Scientist, Honda Research Institute Europe
human-robot interactioncognitive scienceartificial intelligence