Robotic Policy Learning via Human-assisted Action Preference Optimization

📅 2025-06-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language-action (VLA) robotic systems rely heavily on expert demonstrations and lack autonomous learning-from-failure capabilities, hindering real-world deployment. To address this, we propose a human-in-the-loop intervention framework and a novel Hierarchical Action Preference Optimization (HAPO) method. HAPO collects corrective trajectories via real-time human interventions, constructs an action-level binary preference model, and introduces an adaptive reweighting algorithm to effectively integrate preference signals into the VLA action generation pipeline—thereby resolving irreversible interaction errors and token-level probability mismatch. Evaluated in both simulation and on physical robot platforms, HAPO significantly improves policy robustness and generalization across tasks: average task failure rate decreases by 42%, and fault response speed increases by 3.1× compared to baseline methods.

Technology Category

Application Category

📝 Abstract
Establishing a reliable and iteratively refined robotic system is essential for deploying real-world applications. While Vision-Language-Action (VLA) models are widely recognized as the foundation model for such robotic deployment, their dependence on expert demonstrations hinders the crucial capabilities of correction and learning from failures. To mitigate this limitation, we introduce a Human-assisted Action Preference Optimization method named HAPO, designed to correct deployment failures and foster effective adaptation through preference alignment for VLA models. This method begins with a human-robot collaboration framework for reliable failure correction and interaction trajectory collection through human intervention. These human-intervention trajectories are further employed within the action preference optimization process, facilitating VLA models to mitigate failure action occurrences while enhancing corrective action adaptation. Specifically, we propose an adaptive reweighting algorithm to address the issues of irreversible interactions and token probability mismatch when introducing preference optimization into VLA models, facilitating model learning from binary desirability signals derived from interactions. Through combining these modules, our human-assisted action preference optimization method ensures reliable deployment and effective learning from failure for VLA models. The experiments conducted in simulation and real-world scenarios prove superior generalization and robustness of our framework across a variety of manipulation tasks.
Problem

Research questions and friction points this paper is trying to address.

Correcting deployment failures in Vision-Language-Action models
Enhancing corrective action adaptation through preference alignment
Addressing irreversible interactions and token probability mismatch issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-assisted Action Preference Optimization (HAPO)
Adaptive reweighting algorithm for preference optimization
Human-robot collaboration for failure correction
🔎 Similar Papers
No similar papers found.
Wenke Xia
Wenke Xia
Renmin University of China
Yichu Yang
Yichu Yang
Bytedance Research
Robotics
H
Hongtao Wu
ByteDance Seed
X
Xiao Ma
ByteDance Seed
Tao Kong
Tao Kong
ByteDance Research
Robot Foundation ModelRobot LearningComputer Vision
D
Di Hu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing; ByteDance Seed; Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE; Beijing Key Laboratory of Research on Large Models and Intelligent Governance