🤖 AI Summary
Existing benchmarks for evaluating LLM planning predominantly focus on static, single-turn scenarios, failing to capture real-world requirements for dynamic adaptation and multi-constraint trade-offs. Method: We introduce Flex-TravelPlanner, a benchmark built around two evaluation principles: (1) *multi-turn dynamic constraint introduction*, and (2) *priority-aware conflict resolution*. Built on an extended TravelPlanner dataset, it provides an interactive, multi-round evaluation framework supporting incremental constraint addition, explicit priority annotation, and attribution analysis. Contribution/Results: Experiments show that state-of-the-art models, including GPT-4o and Llama 3.1 70B, degrade significantly in multi-turn dynamic settings despite strong single-turn accuracy. Constraint ordering and priority ambiguity substantially reduce planning correctness, exposing systematic weaknesses in cross-turn adaptability and constraint prioritization. Flex-TravelPlanner thus provides a reproducible, attributable standard for rigorously evaluating dynamic planning capabilities.
📝 Abstract
Real-world planning problems require constant adaptation to changing requirements and balancing of competing constraints. However, current benchmarks for evaluating LLMs' planning capabilities primarily focus on static, single-turn scenarios. We introduce Flex-TravelPlanner, a benchmark that evaluates language models' ability to reason flexibly in dynamic planning scenarios. Building on the TravelPlanner dataset (Xie et al., 2024), we introduce two novel evaluation settings: (1) sequential constraint introduction across multiple turns, and (2) scenarios with explicitly prioritized competing constraints. Our analysis of GPT-4o and Llama 3.1 70B reveals several key findings: models' performance on single-turn tasks poorly predicts their ability to adapt plans across multiple turns; constraint introduction order significantly affects performance; and models struggle with constraint prioritization, often incorrectly favoring newly introduced lower-priority preferences over existing higher-priority constraints. These findings highlight the importance of evaluating LLMs in more realistic, dynamic planning scenarios and suggest specific directions for improving model performance on complex planning tasks. The code and dataset for our framework are publicly available at https://github.com/juhyunohh/FlexTravelBench.
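To make the two evaluation settings concrete, here is a minimal sketch of how multi-turn constraint introduction and priority-aware scoring could be implemented. This is an illustrative assumption on our part, not the benchmark's actual code: the `Constraint` class, the `evaluate_turns` function, and the "priority inversion" metric (satisfying a lower-priority constraint while violating a higher-priority one, the failure mode the abstract describes) are all hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Constraint:
    """A named predicate over a plan, with an explicit priority (higher = more important)."""
    name: str
    check: Callable[[Dict], bool]
    priority: int = 1

def evaluate_turns(plans: List[Dict], turns: List[List[Constraint]]) -> List[Dict]:
    """Score each revised plan against the constraints introduced so far.

    plans[i] is the model's plan after turn i; turns[i] holds the
    constraints newly introduced at turn i (they accumulate across turns).
    """
    active: List[Constraint] = []
    results = []
    for plan, new_constraints in zip(plans, turns):
        active.extend(new_constraints)  # constraints accumulate turn by turn
        satisfied = [c for c in active if c.check(plan)]
        # Priority inversion: a violated constraint outranks some satisfied one.
        inversions = [
            c for c in active
            if not c.check(plan)
            and any(s.priority < c.priority for s in satisfied)
        ]
        results.append({
            "pass_rate": len(satisfied) / len(active),
            "priority_inversions": len(inversions),
        })
    return results
```

Under this sketch, a model that revises its plan to honor a newly added cuisine preference while breaking an earlier, higher-priority budget constraint would register a priority inversion, separating "satisfied many constraints" from "satisfied the right ones".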