🤖 AI Summary
Current vision-language-action (VLA) robotic systems rely heavily on expert demonstrations and lack autonomous learning-from-failure capabilities, hindering real-world deployment. To address this, we propose a human-in-the-loop intervention framework and a novel Hierarchical Action Preference Optimization (HAPO) method. HAPO collects corrective trajectories via real-time human interventions, constructs an action-level binary preference model, and introduces an adaptive reweighting algorithm to effectively integrate preference signals into the VLA action generation pipeline—thereby resolving irreversible interaction errors and token-level probability mismatch. Evaluated in both simulation and on physical robot platforms, HAPO significantly improves policy robustness and generalization across tasks: average task failure rate decreases by 42%, and fault response speed increases by 3.1× compared to baseline methods.
📝 Abstract
Establishing a reliable and iteratively refined robotic system is essential for deploying real-world applications. While Vision-Language-Action (VLA) models are widely recognized as the foundation model for such robotic deployment, their dependence on expert demonstrations hinders the crucial capabilities of correction and learning from failures. To mitigate this limitation, we introduce a Human-assisted Action Preference Optimization method named HAPO, designed to correct deployment failures and foster effective adaptation through preference alignment for VLA models. This method begins with a human-robot collaboration framework for reliable failure correction and interaction trajectory collection through human intervention. These human-intervention trajectories are further employed within the action preference optimization process, facilitating VLA models to mitigate failure action occurrences while enhancing corrective action adaptation. Specifically, we propose an adaptive reweighting algorithm to address the issues of irreversible interactions and token probability mismatch when introducing preference optimization into VLA models, facilitating model learning from binary desirability signals derived from interactions. Through combining these modules, our human-assisted action preference optimization method ensures reliable deployment and effective learning from failure for VLA models. The experiments conducted in simulation and real-world scenarios prove superior generalization and robustness of our framework across a variety of manipulation tasks.