🤖 AI Summary
Traditional robot trajectory generation methods often sacrifice motion legibility—the clarity with which a robot’s intent is conveyed—in order to prioritize efficiency; meanwhile, existing legibility-aware approaches produce only a single “most legible” trajectory, offering no continuous control over intent expressiveness. This paper introduces the first robot motion generation framework enabling **full-spectrum controllable legibility modulation**, spanning trajectories from highly unambiguous to highly ambiguous. Its core contributions are: (1) an Information Potential Field model that assigns continuous legibility scores to trajectories; and (2) a two-stage diffusion architecture that decouples path generation from motion synthesis, enabling fine-grained legibility control. Evaluated on 2D and 3D reaching tasks, the method generates diverse, controllable trajectories with varying degrees of legibility while achieving performance comparable to state-of-the-art approaches.
📝 Abstract
Legibility of robot motion is critical in human-robot interaction, as it allows humans to quickly infer a robot's intended goal. Traditional trajectory generation methods typically prioritize efficiency and often fail to make the robot's intentions clear to humans. Meanwhile, existing approaches to legible motion usually produce only a single "most legible" trajectory, overlooking the need to modulate intent expressiveness in different contexts. In this work, we propose a novel motion generation framework that enables controllable legibility across the full spectrum, from highly legible to highly ambiguous motions. We introduce a modeling approach based on an Information Potential Field to assign continuous legibility scores to trajectories, and build upon it with a two-stage diffusion framework that first generates paths at specified legibility levels and then translates them into executable robot actions. Experiments in both 2D and 3D reaching tasks demonstrate that our approach produces diverse and controllable motions with varying degrees of legibility, while achieving performance comparable to state-of-the-art methods. Code and project page: https://legibility-modulator.github.io.
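To make the notion of a continuous legibility score concrete, here is a minimal sketch of one common way such a score can be computed (this is an illustrative assumption, not the paper's actual Information Potential Field model): infer a posterior over candidate goals from a partial trajectory using a cost-based likelihood, then map the posterior's entropy to a score in [0, 1], where 1 is maximally unambiguous. The function name, the straight-line cost proxy, and the `beta` rationality parameter are all hypothetical choices for this sketch.

```python
import numpy as np

def legibility_score(partial_traj, start, goals, beta=1.0):
    """Hypothetical legibility score for a partial trajectory.

    Posterior over candidate goals from a cost-based likelihood
    (straight-line distances as a cost proxy), mapped to
    1 - normalized entropy: 1 = unambiguous, 0 = fully ambiguous.
    """
    q = partial_traj[-1]  # current endpoint of the partial trajectory
    # Path length traveled so far
    progress = np.linalg.norm(np.diff(partial_traj, axis=0), axis=1).sum()
    logits = []
    for g in goals:
        # Cost so far plus straight-line cost-to-go, relative to
        # the optimal (straight-line) cost from start to this goal
        cost = progress + np.linalg.norm(g - q)
        optimal = np.linalg.norm(g - start)
        logits.append(-beta * (cost - optimal))
    logits = np.array(logits)
    p = np.exp(logits - logits.max())  # softmax over goals
    p /= p.sum()
    entropy = -(p * np.log(p + 1e-12)).sum()
    return 1.0 - entropy / np.log(len(goals))
```

Under this sketch, a trajectory heading straight at one of two goals scores near 1, while one heading exactly between them scores near 0; a legibility-controllable generator would be asked to produce trajectories matching a target value on this scale.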