AI Summary
To address the insufficient integration of multimodal inputs and counterfactual reasoning in embodied AI, this paper proposes LLaPa, a unified framework for procedural planning. First, it introduces a Task-Environment Reranker (TER) that constructs a task-sensitive multimodal feature space. Second, it designs a Counterfactual Activities Retriever (CAR) to explicitly model action feasibility under anomalous conditions. LLaPa integrates vision-language modeling, task-oriented segmentation, cross-modal feature alignment, and counterfactual retrieval end to end to generate executable action sequences. Evaluated on the ActPlan-1K and ALFRED benchmarks, LLaPa achieves state-of-the-art planning quality, outperforming prior methods in Longest Common Subsequence (LCS) and execution accuracy. This work pioneers counterfactual-aware multimodal procedural planning and releases both code and models publicly.
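The summary does not spell out how TER's re-ranking works internally; as a minimal illustration of the general idea (scoring segmented environment regions by similarity to a task embedding and promoting the most task-relevant ones), here is a toy sketch. The function names `cosine_sim` and `rerank_regions` and the use of precomputed embedding vectors are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank_regions(task_emb: np.ndarray, region_embs: list[np.ndarray]):
    """Rank segmented environment regions by similarity to the task embedding.

    Returns (region_index, score) pairs, most task-relevant region first.
    """
    scores = [cosine_sim(task_emb, r) for r in region_embs]
    order = np.argsort(scores)[::-1]  # descending by similarity
    return [(int(i), scores[i]) for i in order]

# Toy usage with 2-D embeddings: region 1 points roughly along the task vector,
# so it should be ranked first.
task = np.array([1.0, 0.0])
regions = [np.array([0.0, 1.0]), np.array([0.9, 0.1])]
print(rerank_regions(task, regions))
```

In a real system the embeddings would come from the vision-language model's text and image encoders rather than hand-written vectors.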
Abstract
While large language models (LLMs) have advanced procedural planning for embodied AI systems through strong reasoning abilities, the integration of multimodal inputs and counterfactual reasoning remains underexplored. To tackle these challenges, we introduce LLaPa, a vision-language model framework designed for multimodal procedural planning. LLaPa generates executable action sequences from textual task descriptions and visual environmental images using vision-language models (VLMs). Furthermore, we enhance LLaPa with two auxiliary modules to improve procedural planning. The first module, the Task-Environment Reranker (TER), leverages task-oriented segmentation to create a task-sensitive feature space, aligning textual descriptions with visual environments and emphasizing critical regions for procedural execution. The second module, the Counterfactual Activities Retriever (CAR), identifies and emphasizes potential counterfactual conditions, enhancing the model's reasoning capability in counterfactual scenarios. Extensive experiments on the ActPlan-1K and ALFRED benchmarks demonstrate that LLaPa generates higher-quality plans with superior LCS and correctness, outperforming advanced models. The code and models are available at https://github.com/sunshibo1234/LLaPa.
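The abstract describes CAR as retrieving counterfactual conditions relevant to the current scene. As a rough sketch of that retrieval step only, here is a toy nearest-neighbor lookup over a tiny corpus of anomalous conditions. The corpus `COUNTERFACTUAL_DB`, the embeddings, and the function `retrieve_counterfactuals` are all hypothetical stand-ins, not the paper's actual data or API.

```python
import numpy as np

# Hypothetical mini-corpus: anomalous conditions paired with toy embeddings.
COUNTERFACTUAL_DB = {
    "the mug is already broken": np.array([0.9, 0.1, 0.0]),
    "the stove is out of gas":   np.array([0.1, 0.9, 0.0]),
    "the door is locked":        np.array([0.0, 0.1, 0.9]),
}

def retrieve_counterfactuals(scene_emb: np.ndarray, k: int = 2) -> list[str]:
    """Return the k stored counterfactual conditions most similar to the scene."""
    def sim(v: np.ndarray) -> float:
        return float(np.dot(scene_emb, v) /
                     (np.linalg.norm(scene_emb) * np.linalg.norm(v)))
    ranked = sorted(COUNTERFACTUAL_DB,
                    key=lambda cond: sim(COUNTERFACTUAL_DB[cond]),
                    reverse=True)
    return ranked[:k]

# Toy usage: a scene embedding close to the first corpus entry.
print(retrieve_counterfactuals(np.array([1.0, 0.0, 0.0]), k=1))
```

The retrieved conditions would then be surfaced to the planner so the generated action sequence accounts for them, per the abstract's description of CAR.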