LLaPa: A Vision-Language Model Framework for Counterfactual-Aware Procedural Planning

πŸ“… 2025-07-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the insufficient integration of multimodal inputs and counterfactual reasoning in embodied AI, this paper proposes LLaPa, a unified vision-language framework for procedural planning. First, it introduces a Task-Environment Reranker (TER) to construct a task-sensitive multimodal feature space. Second, it designs a Counterfactual Activities Retriever (CAR) to explicitly model action feasibility under anomalous conditions. LLaPa integrates vision-language modeling, task-oriented segmentation, cross-modal feature alignment, and counterfactual retrieval end to end to generate executable action sequences. Evaluated on the ActPlan-1K and ALFRED benchmarks, LLaPa achieves state-of-the-art planning quality, outperforming prior methods in Longest Common Subsequence (LCS) and execution accuracy. This work pioneers counterfactual-aware multimodal procedural planning, and both the code and models are publicly released.

πŸ“ Abstract
While large language models (LLMs) have advanced procedural planning for embodied AI systems through strong reasoning abilities, the integration of multimodal inputs and counterfactual reasoning remains underexplored. To tackle these challenges, we introduce LLaPa, a vision-language model framework designed for multimodal procedural planning. LLaPa generates executable action sequences from textual task descriptions and visual environmental images using vision-language models (VLMs). Furthermore, we enhance LLaPa with two auxiliary modules to improve procedural planning. The first module, the Task-Environment Reranker (TER), leverages task-oriented segmentation to create a task-sensitive feature space, aligning textual descriptions with visual environments and emphasizing critical regions for procedural execution. The second module, the Counterfactual Activities Retriever (CAR), identifies and emphasizes potential counterfactual conditions, enhancing the model's reasoning capability in counterfactual scenarios. Extensive experiments on the ActPlan-1K and ALFRED benchmarks demonstrate that LLaPa generates higher-quality plans with superior LCS and correctness, outperforming advanced models. The code and models are available at https://github.com/sunshibo1234/LLaPa.
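The LCS metric cited in the abstract is the standard longest-common-subsequence measure between a predicted action sequence and a reference plan, computed by dynamic programming. A minimal sketch follows; the action strings are hypothetical, and the normalization convention (dividing by the reference length) is an assumption that may differ from the benchmarks' exact definition:

```python
def lcs_length(pred, ref):
    """Length of the longest common subsequence of two action sequences,
    via the classic O(len(pred) * len(ref)) dynamic program."""
    m, n = len(pred), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if pred[i - 1] == ref[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # actions match: extend LCS
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one action
    return dp[m][n]

# Hypothetical example plans (not from the paper's benchmarks):
pred = ["walk to kitchen", "open fridge", "grab milk", "close fridge"]
ref = ["walk to kitchen", "open fridge", "grab milk", "pour milk", "close fridge"]

score = lcs_length(pred, ref) / len(ref)  # assumed normalization: 4/5 = 0.8
```

A higher normalized score means the predicted plan preserves more of the reference plan's steps in the correct order, while tolerating insertions and omissions.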
Problem

Research questions and friction points this paper is trying to address.

Integrate multimodal inputs for procedural planning
Enhance counterfactual reasoning in AI systems
Generate executable action sequences from text and images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses vision-language models for multimodal planning
Integrates task-environment reranker for alignment
Employs counterfactual retriever for enhanced reasoning
πŸ”Ž Similar Papers
No similar papers found.
Shibo Sun
Harbin Institute of Technology, Harbin, China
Xue Li
Harbin Institute of Technology, Harbin, China
Donglin Di
Li Auto Inc.
Mingjie Wei
Xidian University
Lanshun Nie
Harbin Institute of Technology
Wei-Nan Zhang
Harbin Institute of Technology, Harbin, China
Dechen Zhan
Harbin Institute of Technology, Harbin, China
Yang Song
University of New South Wales, Sydney, New South Wales, Australia
Lei Fan
University of New South Wales, Sydney, New South Wales, Australia