LookPlanGraph: Embodied Instruction Following Method with VLM Graph Augmentation

📅 2025-12-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Static scene graphs fail to adapt to environmental dynamics in embodied instruction following. Method: This paper proposes a dynamic scene graph modeling approach based on an online-evolving vision-language model (VLM), which parses the agent’s egocentric visual input in real time and closed-loop integrates static assets with object priors to jointly update perception, planning, and execution. Contributions/Results: (1) We introduce the first online-evolving VLM-augmented graph structure, enabling both prior validation and novel entity discovery; (2) We construct GraSIF—the first Graph Scenes for Instruction Following dataset—equipped with an automated verification framework. Evaluated on VirtualHome, OmniGibson, and a real robot platform, our method achieves a 23.6% higher task success rate than static-graph baselines under object displacement perturbations, demonstrating significantly improved robustness and generalization.

Technology Category

Application Category

📝 Abstract
Methods that use Large Language Models (LLM) as planners for embodied instruction following tasks have become widespread. To successfully complete tasks, the LLM must be grounded in the environment in which the robot operates. One solution is to use a scene graph that contains all the necessary information. Modern methods rely on prebuilt scene graphs and assume that all task-relevant information is available at the start of planning. However, these approaches do not account for changes in the environment that may occur between the graph construction and the task execution. We propose LookPlanGraph - a method that leverages a scene graph composed of static assets and object priors. During plan execution, LookPlanGraph continuously updates the graph with relevant objects, either by verifying existing priors or discovering new entities. This is achieved by processing the agents egocentric camera view using a Vision Language Model. We conducted experiments with changed object positions VirtualHome and OmniGibson simulated environments, demonstrating that LookPlanGraph outperforms methods based on predefined static scene graphs. To demonstrate the practical applicability of our approach, we also conducted experiments in a real-world setting. Additionally, we introduce the GraSIF (Graph Scenes for Instruction Following) dataset with automated validation framework, comprising 514 tasks drawn from SayPlan Office, BEHAVIOR-1K, and VirtualHome RobotHow. Project page available at https://lookplangraph.github.io .
Problem

Research questions and friction points this paper is trying to address.

Addresses dynamic environment changes in robot task planning
Enhances scene graph with real-time vision-language model updates
Improves embodied instruction following over static graph methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic scene graph updates during execution
Vision Language Model processes egocentric camera view
Continuous verification and discovery of objects