🤖 AI Summary
Existing LLM planning research relies on simplistic environmental benchmarks, leading to overestimated capabilities and obscured safety risks. This work introduces the first fine-grained, multi-category benchmark of natural-language constraints, designed to systematically evaluate LLMs' planning and formalization abilities under complex semantic constraints. Experiments span four datasets, four state-of-the-art reasoning models, three formal languages, and five methodological paradigms, revealing an average performance drop of ~50% once natural-language constraints are introduced. The contributions are threefold: (1) the first planning evaluation benchmark explicitly designed for real-world constraints; (2) empirical evidence that current LLM planners are markedly fragile under semantic complexity and lexical diversity; and (3) insights into the resulting safety risks, together with actionable directions for improving safety-critical planning systems.
📝 Abstract
LLMs have been widely used in planning, either as planners that generate action sequences end-to-end, or as formalizers that represent the planning domain and problem in a formal language from which plans can be derived deterministically. However, both lines of work rely on standard benchmarks that include only generic and simplistic environmental specifications, leading to potential overestimation of LLMs' planning ability and to safety concerns in downstream tasks. We bridge this gap by augmenting widely used planning benchmarks with manually annotated, fine-grained, and rich natural-language constraints spanning four formally defined categories. Across four state-of-the-art reasoning LLMs, three formal languages, five methods, and four datasets, we show that introducing constraints not only consistently halves performance but also substantially degrades robustness to problem complexity and lexical shift.
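To make the two paradigms contrasted above concrete, the sketch below shows, in Python, how a natural-language constraint might be attached to a planning task and routed through either an LLM-as-planner or an LLM-as-formalizer pipeline. This is a minimal illustration, not the paper's benchmark code: `query_llm`, `solve_pddl`, and the Blocksworld-style strings are hypothetical placeholders standing in for any chat-completion call and any classical PDDL solver wrapper.

```python
from typing import Callable

# Hypothetical stand-ins: any chat-completion function and any symbolic
# PDDL solver wrapper could be plugged in for these callables.
QueryFn = Callable[[str], str]
SolveFn = Callable[[str, str], str]


def plan_directly(problem_nl: str, constraint_nl: str, query_llm: QueryFn) -> str:
    """LLM-as-planner: the model emits the action sequence end-to-end."""
    prompt = (
        "Produce a plan (one action per line) for the task below.\n"
        f"Task: {problem_nl}\n"
        f"Additional constraint: {constraint_nl}\n"
    )
    return query_llm(prompt)


def plan_via_formalization(problem_nl: str, constraint_nl: str,
                           query_llm: QueryFn, solve_pddl: SolveFn) -> str:
    """LLM-as-formalizer: the model writes a formal (here, PDDL-style)
    domain and problem; a deterministic solver then derives the plan."""
    prompt = (
        "Translate the task and constraint below into a PDDL domain and problem.\n"
        "Return the domain first, then the problem, separated by '---'.\n"
        f"Task: {problem_nl}\n"
        f"Additional constraint: {constraint_nl}\n"
    )
    domain_pddl, problem_pddl = query_llm(prompt).split("---", 1)
    return solve_pddl(domain_pddl, problem_pddl)


if __name__ == "__main__":
    # Toy Blocksworld-style task with one natural-language constraint.
    task = "Stack block A on block B, starting with both blocks on the table."
    constraint = "Never place a block back on the table once it has been picked up."
    fake_llm = lambda prompt: "pick-up A\nstack A B"  # canned response for illustration
    print(plan_directly(task, constraint, fake_llm))
```

Under this framing, the paper's augmentation amounts to injecting richer constraint sentences into the task description; the reported drop in performance reflects how often the resulting plans (direct or solver-derived) violate those constraints or fail to solve the problem at all.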