Scaling LLM Planning: NL2FLOW for Parametric Problem Generation and Rigorous Evaluation

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face a fundamental bottleneck in planning and reasoning: scalable, reliable data generation and evaluation. To address this, the author proposes NL2FLOW, a fully automated system that parametrically generates planning problems, expressed in natural language, a structured intermediate representation, and formal PDDL, and rigorously evaluates the quality of generated plans via formal verification. On a benchmark of 2,296 problems in the automated workflow generation domain, the best-performing open-source instruction-tuned models achieved an 86% success rate in generating valid (executable) plans and 69% in generating optimal plans, among problems with feasible solutions. Regression analysis shows that the influence of problem characteristics on plan generation depends on both the model and the prompt design. Notably, the highest success rate for translating natural language into a JSON plan representation was lower than the highest rate of generating a valid plan directly, suggesting that introducing intermediate translation steps may degrade performance and that models able to reason directly from natural language to action hold an advantage.

📝 Abstract
Progress in enhancing large language model (LLM) planning and reasoning capabilities is significantly hampered by the bottleneck of scalable, reliable data generation and evaluation. To overcome this, I introduce NL2FLOW, a fully automated system for parametrically generating planning problems - expressed in natural language, a structured intermediate representation, and formal PDDL - and rigorously evaluating the quality of generated plans. I demonstrate NL2FLOW's capabilities by generating a dataset of 2,296 problems in the automated workflow generation domain and evaluating multiple open-source, instruction-tuned LLMs. My results reveal that the highest-performing models achieved 86% success in generating valid plans and 69% in generating optimal plans, specifically for problems with feasible solutions. Regression analysis shows that the influence of problem characteristics on plan generation is contingent on both model and prompt design. Notably, I observed that the highest success rate for translating natural language into a JSON representation of a plan was lower than the highest rate of generating a valid plan directly. This suggests that unnecessarily decomposing the reasoning task - introducing intermediate translation steps - may actually degrade performance, implying a benefit to models capable of reasoning directly from natural language to action. As we scale LLM reasoning to increasingly complex problems, the bottlenecks and sources of error within these systems will inevitably shift. Therefore, a dynamic understanding of these limitations - and the tools to systematically reveal them - will be crucial for unlocking the full potential of LLMs as intelligent problem solvers.
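The core idea of parametric generation described in the abstract - emitting each problem in three aligned forms (natural language, a structured intermediate representation, and PDDL) - can be sketched as follows. This is a minimal toy illustration, not NL2FLOW's actual schema: the variable/action names (`var_i`, `act_i`), the JSON layout, and the workflow domain encoding are all assumptions for the sake of the example.

```python
import json
import random

def generate_problem(n_variables: int, n_actions: int, seed: int = 0):
    """Parametrically generate one toy workflow-planning problem in three
    aligned forms: natural language, a JSON intermediate, and PDDL.
    All names and layouts here are illustrative, not NL2FLOW's own."""
    rng = random.Random(seed)
    variables = [f"var_{i}" for i in range(n_variables)]

    # Each toy action consumes one already-reachable variable and produces
    # the next one, so generated problems are feasible by construction.
    actions = []
    for i in range(n_actions):
        pre = rng.choice(variables[: i + 1])
        eff = variables[min(i + 1, n_variables - 1)]
        actions.append({"name": f"act_{i}", "requires": [pre], "produces": [eff]})

    # 1. Natural-language form.
    nl = " ".join(
        f"Action {a['name']} needs {a['requires'][0]} and yields {a['produces'][0]}."
        for a in actions
    )

    # 2. Structured intermediate representation (JSON).
    intermediate = json.dumps(
        {"variables": variables, "actions": actions, "goal": variables[-1]}
    )

    # 3. Formal PDDL domain and problem (grounded, for simplicity).
    pddl_actions = "\n".join(
        f"  (:action {a['name']}\n"
        f"   :precondition (known {a['requires'][0]})\n"
        f"   :effect (known {a['produces'][0]}))"
        for a in actions
    )
    domain = f"(define (domain workflow)\n  (:predicates (known ?v))\n{pddl_actions})"
    problem = (
        f"(define (problem wf-{seed}) (:domain workflow)\n"
        f"  (:objects {' '.join(variables)})\n"
        f"  (:init (known {variables[0]}))\n"
        f"  (:goal (known {variables[-1]})))"
    )
    return nl, intermediate, domain, problem
```

Because the generator is seeded and parameterized by problem size, sweeping `n_variables` and `n_actions` yields arbitrarily many reproducible problems whose structural characteristics are known in advance - exactly what the regression analysis over problem characteristics requires.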
Problem

Research questions and friction points this paper is trying to address.

Scalable data generation and evaluation bottleneck for LLM planning
Automated parametric problem generation and rigorous plan evaluation
Impact of problem characteristics on LLM plan generation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated parametric problem generation system
Rigorous evaluation of LLM planning quality
Direct natural language to action reasoning
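The "rigorous evaluation" contribution rests on two checks reported in the abstract: whether a generated plan is valid (every step executable, goal reached) and whether it is optimal (no shorter valid plan exists). A hedged sketch of both checks, assuming the same toy action representation as above (`requires`/`produces` sets); this is not NL2FLOW's actual validator:

```python
from itertools import product

def is_valid_plan(plan, actions, init, goal):
    """Check executability: each step's preconditions hold when it is
    applied, and the goal holds in the final state.
    `actions` maps an action name to a (requires, produces) pair."""
    state = set(init)
    for step in plan:
        requires, produces = actions[step]
        if not set(requires) <= state:
            return False  # precondition violated: plan is not executable
        state |= set(produces)
    return set(goal) <= state

def is_optimal_plan(plan, actions, init, goal):
    """Brute-force optimality check: the plan is valid and no strictly
    shorter sequence of actions achieves the goal. Exponential in plan
    length, so suitable only for small toy problems."""
    for length in range(len(plan)):
        for candidate in product(actions, repeat=length):
            if is_valid_plan(candidate, actions, init, goal):
                return False  # a shorter valid plan exists
    return is_valid_plan(plan, actions, init, goal)
```

Separating the two checks mirrors the paper's two headline metrics: the 86% figure corresponds to plans passing the validity check, and the 69% figure to the stricter optimality check.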