Human-to-Robot Interaction: Learning from Video Demonstration for Robot Imitation

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing video-demonstration learning methods, which struggle to extract fine-grained, robot-executable instructions and generalize poorly because they rely on large amounts of paired data. To overcome these challenges, the authors propose a modular human-robot imitation learning framework built on a two-stage decoupled architecture that separates video understanding from control-policy generation. In the first stage, a vision-language model augmented with a Temporal Shift Module (TSM) identifies actions and interacting objects in unstructured videos. In the second stage, executable policies are generated with the TD3 reinforcement learning algorithm. The approach substantially improves cross-scenario generalization, achieving BLEU-4 scores of 0.351 and 0.265 on standard and novel objects, respectively, and attaining an average robotic task success rate of 87.5%, with complex pick-and-place tasks reaching up to 90%.
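The two-stage decoupled design described above can be sketched as a minimal interface: stage one maps raw video to (action, object) instructions, and stage two executes each instruction with a learned policy. All names and the placeholder logic below are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Instruction:
    """A fine-grained, robot-executable step extracted from video."""
    action: str  # e.g. "pick"
    obj: str     # e.g. "red cube"


def understand_video(frames: List[str]) -> List[Instruction]:
    """Stage 1 (stand-in): a TSM-augmented vision-language model would
    map video frames to (action, object) instructions. Here we simply
    pretend a pick-and-place demonstration was recognized."""
    return [Instruction("pick", "red cube"), Instruction("put", "red cube")]


def execute(step: Instruction) -> bool:
    """Stage 2 (stand-in): a TD3-trained control policy would carry out
    the instruction on the manipulator; we just check the action is one
    of the four fundamental actions the paper evaluates."""
    return step.action in {"reach", "pick", "move", "put"}


if __name__ == "__main__":
    plan = understand_video(["frame_0.png", "frame_1.png"])
    print([execute(step) for step in plan])
```

Because the stages only share the `Instruction` interface, either side can be retrained or swapped without touching the other, which is the point of the decoupling.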

📝 Abstract
Learning from Demonstration (LfD) offers a promising paradigm for robot skill acquisition. Recent approaches attempt to extract manipulation commands directly from video demonstrations, yet face two critical challenges: (1) general video captioning models prioritize global scene features over task-relevant objects, producing descriptions unsuitable for precise robotic execution, and (2) end-to-end architectures coupling visual understanding with policy learning require extensive paired datasets and struggle to generalize across objects and scenarios. To address these limitations, we propose a novel "Human-to-Robot" imitation learning pipeline that enables robots to acquire manipulation skills directly from unstructured video demonstrations, inspired by the human ability to learn by watching and imitating. Our key innovation is a modular framework that decouples the learning process into two distinct stages: (1) Video Understanding, which combines Temporal Shift Modules (TSM) with Vision-Language Models (VLMs) to extract actions and identify interacted objects, and (2) Robot Imitation, which employs TD3-based deep reinforcement learning to execute the demonstrated manipulations. We validated our approach in PyBullet simulation environments with a UR5e manipulator and in a real-world experiment with a UF850 manipulator across four fundamental actions: reach, pick, move, and put. For video understanding, our method achieves 89.97% action classification accuracy and BLEU-4 scores of 0.351 on standard objects and 0.265 on novel objects, representing improvements of 76.4% and 128.4% over the best baseline, respectively. For robot manipulation, our framework achieves an average success rate of 87.5% across all actions, with 100% success on reaching tasks and up to 90% on complex pick-and-place operations. The project website is available at https://thanhnguyencanh.github.io/LfD4hri.
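The Robot Imitation stage relies on TD3, whose two core ingredients are clipped double-Q learning (taking the minimum of two target critics) and target-policy smoothing (clipped noise added to the target action). The sketch below shows only this generic target computation, not the paper's training setup; all function and parameter names are illustrative.

```python
import numpy as np


def td3_target(reward, next_state, done,
               target_actor, target_q1, target_q2,
               gamma=0.99, noise_std=0.2, noise_clip=0.5,
               act_low=-1.0, act_high=1.0, rng=None):
    """Compute the TD3 critic target:
        y = r + gamma * (1 - done) * min(Q1'(s', a~), Q2'(s', a~)),
    where a~ is the target actor's action perturbed by clipped noise
    (target-policy smoothing). Delayed actor updates, TD3's third
    trick, happen elsewhere in the training loop and are not shown."""
    rng = rng or np.random.default_rng(0)
    action = target_actor(next_state)
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(action)),
                    -noise_clip, noise_clip)
    smoothed = np.clip(action + noise, act_low, act_high)
    q_min = np.minimum(target_q1(next_state, smoothed),
                       target_q2(next_state, smoothed))
    return reward + gamma * (1.0 - done) * q_min
```

Taking the minimum of the twin critics counters Q-value overestimation, which is what makes TD3 a stable choice for continuous-control tasks like the manipulator actions evaluated here.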
Problem

Research questions and friction points this paper is trying to address.

Learning from Demonstration
Robot Imitation
Video Understanding
Generalization
Manipulation Skills
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning from Demonstration
Modular Imitation Learning
Vision-Language Models
Temporal Shift Modules
TD3 Reinforcement Learning
Thanh Nguyen Canh
School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, 923-1211, Ishikawa, Japan.
Thanh-Tuan Tran
University of Engineering and Technology, Vietnam National University, 10000, Hanoi, Vietnam.
Haolan Zhang
School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, 923-1211, Ishikawa, Japan.
Ziyan Gao
Japan Advanced Institute of Science and Technology
Robotics, Machine Learning
Nak Young Chong
Professor of Information Science, JAIST
Robotics
Xiem HoangVan
University of Engineering and Technology, Vietnam National University, 10000, Hanoi, Vietnam.