LLM+MAP: Bimanual Robot Task Planning using Large Language Models and Planning Domain Definition Language

📅 2025-03-21
🤖 AI Summary
This work addresses two core challenges in long-horizon task planning for bimanual robots: (1) tight spatiotemporal coordination between dual arms, and (2) the reasoning hallucinations and logical inconsistencies inherent in large language models (LLMs). We propose a novel framework that deeply couples an LLM with a multi-agent symbolic planner. Methodologically, we integrate GPT-4o’s semantic understanding capability with our custom Multi-Agent PDDL Planner (MAP) to enable automatic task formalization, parallel bimanual action assignment, and constraint-aware re-planning. Crucially, the symbolic planner performs logical verification and correction of LLM-generated plans, ensuring 100% logical correctness and execution feasibility. Experiments on diverse complex bimanual manipulation tasks demonstrate that our approach achieves a 37% higher planning success rate and reduces average plan length by 42% compared to pure-LLM baselines (GPT-4o, V3, o1, and R1), while maintaining tractable planning latency.
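The pipeline sketched in the summary (LLM formalizes the task, a multi-agent symbolic planner allocates and verifies the bimanual plan) might be illustrated roughly as follows. All names here (`formalize_to_pddl`, `allocate`, `verify`, `Action`) are illustrative stand-ins, not the authors' actual code or API:

```python
# Minimal sketch of the LLM+MAP loop; names and logic are illustrative,
# not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    arm: str  # which arm executes the step: "left" or "right"

def formalize_to_pddl(task: str) -> dict:
    """Stand-in for the LLM step: in the real system GPT-4o would
    translate the natural-language task into a PDDL problem."""
    return {"goal": task, "subgoals": ["grasp", "lift", "place"]}

def allocate(subgoals: list) -> list:
    """Stand-in for the multi-agent planner: alternate independent
    subgoals between the two arms so they can execute in parallel."""
    arms = ("left", "right")
    return [Action(g, arms[i % 2]) for i, g in enumerate(subgoals)]

def verify(plan: list, subgoals: list) -> bool:
    """Toy 'logical verification': every subgoal is covered, in order."""
    return [a.name for a in plan] == subgoals

problem = formalize_to_pddl("hand over the cup")
plan = allocate(problem["subgoals"])
assert verify(plan, problem["subgoals"])
```

In the paper's framework, the verification step would additionally correct infeasible LLM output before execution; here it is reduced to a coverage check for brevity.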

📝 Abstract
Bimanual robotic manipulation offers significant versatility, but also presents an inherent challenge due to the complexity of spatial and temporal coordination between two hands. Existing works predominantly focus on attaining human-level manipulation skills for robotic hands, yet little attention has been paid to task planning on long-horizon timescales. With their outstanding in-context learning and zero-shot generation abilities, Large Language Models (LLMs) have been applied and grounded in diverse robotic embodiments to facilitate task planning. However, LLMs still suffer from errors in long-horizon reasoning and from hallucinations in complex robotic tasks, lacking a guarantee of logical correctness when generating plans. Previous works, such as LLM+P, extended LLMs with symbolic planners; however, none have been successfully applied to bimanual robots. New challenges inevitably arise in bimanual manipulation, necessitating not only effective task decomposition but also efficient task allocation. To address these challenges, this paper introduces LLM+MAP, a bimanual planning framework that integrates LLM reasoning and multi-agent planning to automate effective and efficient bimanual task planning. We conduct simulated experiments on various long-horizon manipulation tasks of differing complexity. Our method is built with GPT-4o as the backend, and we compare its performance against plans generated directly by LLMs, including GPT-4o and V3, as well as the recent strong reasoning models o1 and R1. By analyzing metrics such as planning time, success rate, group debits, and planning-step reduction rate, we demonstrate the superior performance of LLM+MAP, while also providing insights into robotic reasoning. Code is available at https://github.com/Kchu/LLM-MAP.
Problem

Research questions and friction points this paper is trying to address.

Addressing bimanual robot task planning complexity
Overcoming LLM limitations in long-horizon reasoning
Integrating LLMs with multi-agent planning for efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with multi-agent planning
Automates bimanual task decomposition and allocation
Uses GPT-4o for robust robotic reasoning
Kun Chu
University of Hamburg
Large Language Models · Task Planning · Robot Learning · Reinforcement Learning
Xufeng Zhao
Knowledge Technology Group, Department of Informatics, University of Hamburg, 22527 Hamburg, Germany
C. Weber
Knowledge Technology Group, Department of Informatics, University of Hamburg, 22527 Hamburg, Germany
Stefan Wermter
Knowledge Technology Group, Department of Informatics, University of Hamburg, 22527 Hamburg, Germany