🤖 AI Summary
This work addresses the limited visual perspective-taking (VPT) capability of embodied-cognition systems, in particular the challenge of accurate Z-axis distance estimation, which is essential for six-degree-of-freedom (6-DOF) spatial understanding and human–robot interaction (HRI). We propose the first VPT-supervised training framework tailored for embodied cognition, built on NVIDIA Omniverse, together with the first synthetic spatial-reasoning dataset to pair ground-truth 4×4 pose matrices with natural-language descriptions. Our method jointly models RGB images and geometric transformation matrices, leveraging vision-language models (VLMs) to learn spatial relationships. Experiments demonstrate significant improvements in Z-axis distance estimation accuracy. The dataset is publicly released, establishing a scalable benchmark and a technical foundation for advancing embodied AI's spatial reasoning in real-world HRI scenarios.
📝 Abstract
We present a conceptual framework for training Vision-Language Models (VLMs) to perform Visual Perspective Taking (VPT), a core embodied-cognition capability essential for Human-Robot Interaction (HRI). As a first step toward this goal, we introduce a synthetic dataset, generated in NVIDIA Omniverse, that enables supervised learning for spatial reasoning tasks. Each instance includes an RGB image, a natural-language description, and a ground-truth 4×4 transformation matrix representing object pose. We focus on inferring Z-axis distance as a foundational skill, with future extensions targeting full six-degree-of-freedom (6-DOF) reasoning. The dataset is publicly available to support further research. This work serves as a foundational step toward embodied AI systems capable of spatial understanding in interactive human-robot scenarios.
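To make the dataset schema concrete, the sketch below shows what one instance might look like and how the Z-axis supervision target falls out of a 4×4 homogeneous pose matrix (the translation sits in the last column, with Z as its third entry). This is a minimal illustration only; the field names, file path, and example values are hypothetical, as the paper's exact data format is not reproduced here.

```python
import numpy as np

# Hypothetical example of one dataset instance as described in the abstract:
# an RGB image, a natural-language description, and a ground-truth
# 4x4 homogeneous transformation matrix [[R, t], [0, 1]] encoding object pose.
instance = {
    "image": "scene_0001.png",  # RGB render from Omniverse (path is illustrative)
    "description": "A red cube 1.5 meters in front of the camera.",
    "pose": np.array([
        [1.0, 0.0, 0.0, 0.2],   # rows 0-2, cols 0-2: 3x3 rotation R
        [0.0, 1.0, 0.0, -0.1],  # rows 0-2, col 3: translation t = (x, y, z)
        [0.0, 0.0, 1.0, 1.5],
        [0.0, 0.0, 0.0, 1.0],   # homogeneous bottom row
    ]),
}

# The Z-axis distance used as the foundational supervision signal is the
# third component of the translation column.
z_distance = instance["pose"][2, 3]
print(f"Ground-truth Z-axis distance: {z_distance} m")  # -> 1.5 m
```

Extending from this single scalar target to full 6-DOF reasoning would mean supervising on the entire matrix (rotation plus translation) rather than only `pose[2, 3]`, which matches the extension path the abstract describes.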