🤖 AI Summary
This work addresses two core challenges in long-horizon task planning for bimanual robots: (1) tight spatiotemporal coordination between the two arms, and (2) reasoning hallucinations and logical inconsistencies inherent in large language models (LLMs). We propose a novel framework that deeply couples an LLM with a multi-agent symbolic planner. Methodologically, we integrate GPT-4o's semantic understanding with our custom Multi-Agent PDDL Planner (MAP) to enable automatic task formalization, parallel bimanual action assignment, and constraint-aware re-planning. Crucially, the symbolic planner performs logical verification and correction of LLM-generated plans, ensuring 100% logical correctness and execution feasibility. Experiments on diverse complex bimanual manipulation tasks demonstrate that our approach achieves a 37% higher planning success rate and reduces average plan length by 42% compared to pure-LLM baselines (GPT-4o, V3, o1, R1), while maintaining tractable planning latency.
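The verify-and-correct loop described above can be sketched as follows. This is a minimal illustration, not the paper's code: the pick/place domain, the mocked LLM output, and the brute-force repair step are all hypothetical stand-ins for GPT-4o and the MAP planner.

```python
# Hypothetical sketch: an LLM proposes a plan, a symbolic checker simulates it
# against PDDL-style preconditions/effects, and an invalid plan is repaired.
from itertools import permutations

# Symbolic actions: name -> (preconditions, add effects, delete effects)
ACTIONS = {
    "pick(A)":  ({"clear(A)", "handempty"}, {"holding(A)"}, {"clear(A)", "handempty"}),
    "place(A)": ({"holding(A)"}, {"clear(A)", "handempty", "placed(A)"}, {"holding(A)"}),
}

def validate(plan, state):
    """Simulate the plan symbolically; return (ok, index of first failing step)."""
    s = set(state)
    for i, act in enumerate(plan):
        pre, add, delete = ACTIONS[act]
        if not pre <= s:          # a precondition is unmet -> logical error
            return False, i
        s = (s - delete) | add    # apply the action's effects
    return True, None

def mock_llm_plan():
    # Stands in for the LLM backend; here it "hallucinates" a bad ordering.
    return ["place(A)", "pick(A)"]

def repair(state):
    """Toy repair: exhaustive search over action orderings (illustration only;
    the real planner does constraint-aware multi-agent search)."""
    for cand in permutations(ACTIONS):
        ok, _ = validate(list(cand), state)
        if ok:
            return list(cand)
    return None

initial = {"clear(A)", "handempty"}
plan = mock_llm_plan()
ok, step = validate(plan, initial)
if not ok:
    plan = repair(initial)        # -> ["pick(A)", "place(A)"]
```

The key property this sketch captures is that the symbolic layer, not the LLM, is the arbiter of feasibility: every plan that reaches execution has passed `validate`.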
📝 Abstract
Bimanual robotic manipulation provides significant versatility, but also presents an inherent challenge due to the complexity of the spatial and temporal coordination between two hands. Existing works predominantly focus on attaining human-level manipulation skills for robotic hands, yet little attention has been paid to task planning on long-horizon timescales. With their outstanding in-context learning and zero-shot generation abilities, Large Language Models (LLMs) have been applied and grounded in diverse robotic embodiments to facilitate task planning. However, LLMs still suffer from errors in long-horizon reasoning and from hallucinations in complex robotic tasks, offering no guarantee of logical correctness in the generated plan. Previous works, such as LLM+P, extended LLMs with symbolic planners. However, none have been successfully applied to bimanual robots. New challenges inevitably arise in bimanual manipulation, necessitating not only effective task decomposition but also efficient task allocation. To address these challenges, this paper introduces LLM+MAP, a bimanual planning framework that integrates LLM reasoning and multi-agent planning, automating effective and efficient bimanual task planning. We conduct simulated experiments on various long-horizon manipulation tasks of differing complexity. Our method is built using GPT-4o as the backend, and we compare its performance against plans generated directly by LLMs, including GPT-4o, V3, and the recent strong reasoning models o1 and R1. By analyzing metrics such as planning time, success rate, group debits, and planning-step reduction rate, we demonstrate the superior performance of LLM+MAP, while also providing insights into robotic reasoning. Code is available at https://github.com/Kchu/LLM-MAP.
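The task-allocation idea, i.e. parallelizing a serial plan across two arms to shorten the effective plan, can be illustrated with a toy greedy scheduler. This is a sketch under strong simplifying assumptions (independent actions, hypothetical names and durations, no precedence or which-hand constraints), not the multi-agent planner itself.

```python
# Illustrative greedy two-arm scheduler: assign each action to whichever
# arm becomes free first, then compare the parallel makespan to the
# serial plan length. Action names/durations are hypothetical.

def schedule_two_arms(actions):
    """actions: list of (name, duration_seconds).
    Returns (makespan, per-arm assignment)."""
    free_at = {"left": 0.0, "right": 0.0}     # time each arm becomes free
    assignment = {"left": [], "right": []}
    for name, dur in actions:
        arm = min(free_at, key=free_at.get)   # earliest-available arm
        assignment[arm].append(name)
        free_at[arm] += dur
    return max(free_at.values()), assignment

serial = [("pick(cup)", 2.0), ("pick(lid)", 2.0),
          ("place(cup)", 1.0), ("place(lid)", 1.0)]
makespan, plan = schedule_two_arms(serial)
serial_time = sum(d for _, d in serial)       # 6.0s serial vs 3.0s with two arms
```

In the real framework the allocation must additionally respect symbolic constraints (e.g. the arm that picked the cup must place it), which is why the paper couples allocation with PDDL-level verification rather than using an unconstrained scheduler like this one.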