🤖 AI Summary
To address constraint coupling and cascading errors in multi-constraint planning tasks for LLM-based agents, this paper proposes the Decompose–Parallel-Plan–Merge (DPPM) paradigm: constraint-aware task decomposition enables subtasks to be planned in parallel, while a graph-structured merging mechanism and a conflict-aware verification module support dynamic error correction and reflective refinement. The core contributions are (i) the first multi-constraint-driven parallel planning framework, which overcomes the error-accumulation bottleneck inherent in sequential paradigms; and (ii) an end-to-end verifiable and correctable robust planning pipeline. Evaluated on a travel planning benchmark, DPPM achieves a 32.7% improvement in planning success rate, a 58.4% reduction in constraint violation rate, and a 41% decrease in average planning latency.
📝 Abstract
Despite significant advances in Large Language Models (LLMs), planning tasks still present challenges for LLM-based agents. Existing planning methods face two key limitations: tightly coupled constraints and cascading errors. To address these limitations, we propose a novel parallel planning paradigm, which Decomposes the task, Plans for subtasks in Parallel, and Merges subplans into a final plan (DPPM). Specifically, DPPM decomposes the complex task into subtasks based on its constraints, generates a subplan for each subtask in parallel, and merges the subplans into a global plan. In addition, our approach incorporates a verification and refinement module, enabling error correction and conflict resolution. Experimental results demonstrate that DPPM significantly outperforms existing methods on travel planning tasks.
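The decompose, parallel-plan, merge, and verify-refine stages described above can be sketched as a minimal pipeline. This is a hypothetical illustration, not the paper's implementation: the `decompose`, `plan_subtask`, `merge`, `verify`, and `refine` stubs stand in for LLM prompts, and the constraint checks are toy string matches.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for LLM calls; each would prompt a model in practice.
def decompose(task, constraints):
    # Constraint-aware decomposition: one subtask per constraint.
    return [{"task": task, "constraint": c} for c in constraints]

def plan_subtask(subtask):
    # Generate a subplan that addresses a single constraint.
    return f"plan[{subtask['constraint']}]"

def merge(subplans):
    # Graph-structured merging, simplified here to ordered concatenation.
    return " + ".join(subplans)

def verify(plan, constraints):
    # Conflict-aware verification: list constraints the merged plan misses.
    return [c for c in constraints if f"plan[{c}]" not in plan]

def refine(plan, violations):
    # Reflective refinement: patch the plan to cover violated constraints.
    return merge([plan] + [f"plan[{v}]" for v in violations])

def dppm(task, constraints, max_rounds=3):
    subtasks = decompose(task, constraints)
    with ThreadPoolExecutor() as pool:   # subplans generated in parallel
        subplans = list(pool.map(plan_subtask, subtasks))
    plan = merge(subplans)
    for _ in range(max_rounds):          # verify-and-refine loop
        violations = verify(plan, constraints)
        if not violations:
            break
        plan = refine(plan, violations)
    return plan
```

The key design point is that subplans are independent given the decomposition, so planning latency is bounded by the slowest subtask rather than the sum of all subtasks; merging and verification then restore global consistency.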