🤖 AI Summary
Existing methods rely on large language models (LLMs) or vision-language models (VLMs) to generate supervision signals for embodied tasks, but they suffer either from the limited capacity of text and code to represent complex scenes (LLMs) or from restricted output modalities (VLMs), hindering fine-grained, spatiotemporally continuous supervision.

Method: We propose the first supervision-synthesis framework built on general-purpose video generation models (e.g., Stable Video Diffusion). Given an initial simulation frame and a natural language instruction, the framework synthesizes a semantically correct task-completion video and decodes multimodal supervision from it, including 6D object poses, 2D instance segmentations, and depth maps.

Contribution/Results: Our approach overcomes the input/output modality bottlenecks of LLMs and VLMs, enabling fully automated, large-scale simulation-based policy training without manual annotation. Experiments demonstrate substantially improved supervision quality on complex tasks involving multi-step manipulation and occluded interactions, more efficient end-to-end policy learning, and superior generalization compared to state-of-the-art reward modeling approaches.
📝 Abstract
Automatically generating training supervision for embodied tasks is crucial, as manual design is tedious and does not scale. While prior works use large language models (LLMs) or vision-language models (VLMs) to generate rewards, these approaches are largely limited to simple tasks with well-defined rewards, such as pick-and-place. This limitation arises because LLMs struggle to interpret complex scenes compressed into text or code due to their restricted input modality, while VLM-based rewards, though better at visual perception, remain limited by their less expressive output modality. To address these challenges, we leverage the imagination capability of general-purpose video generation models. Given an initial simulation frame and a textual task description, the video generation model produces a video demonstrating task completion with correct semantics. We then extract rich supervisory signals from the generated video, including 6D object pose sequences, 2D segmentations, and estimated depth, to facilitate task learning in simulation. Our approach significantly improves supervision quality for complex embodied tasks, enabling large-scale training in simulators.
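The described pipeline (initial frame + instruction → generated task-completion video → per-frame multimodal supervision) can be sketched in outline. This is a minimal illustrative skeleton, not the paper's implementation: `generate_task_video` and the per-frame decoders are hypothetical stand-ins (here stubbed with placeholder arrays) for a real video generation model and for off-the-shelf pose-tracking, segmentation, and monocular depth estimators.

```python
import numpy as np

def generate_task_video(init_frame, instruction, num_frames=8):
    """Hypothetical stand-in for a general-purpose video generation model
    (e.g., Stable Video Diffusion) conditioned on an initial simulation
    frame and a text instruction. Here it merely perturbs the input frame
    to produce a dummy frame sequence of the right shape."""
    rng = np.random.default_rng(0)
    return [init_frame + rng.normal(0.0, 0.01, init_frame.shape)
            for _ in range(num_frames)]

def decode_supervision(frames):
    """Hypothetical decoders producing the supervision modalities named in
    the abstract: a 6D object pose (translation + rotation), a 2D instance
    mask, and a depth map per frame. Real systems would run learned pose /
    segmentation / depth models on each generated frame."""
    supervision = []
    for frame in frames:
        h, w, _ = frame.shape
        supervision.append({
            "pose_6d": np.zeros(6),               # placeholder object pose
            "mask": np.zeros((h, w), dtype=bool), # placeholder instance mask
            "depth": np.ones((h, w)),             # placeholder depth map
        })
    return supervision

# Toy usage: one 64x64 RGB simulation frame plus an instruction yields a
# frame sequence and aligned per-frame supervision for policy training.
init_frame = np.zeros((64, 64, 3))
frames = generate_task_video(init_frame, "put the red block in the bowl")
labels = decode_supervision(frames)
print(len(frames), labels[0]["depth"].shape)
```

The key structural point is that supervision is decoded per generated frame, so the resulting pose/mask/depth sequences are temporally aligned with the imagined rollout and can serve as dense, spatiotemporally continuous training signals in simulation.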