🤖 AI Summary
This study investigates the capability of large language models (LLMs) for procedural step ordering: specifically, reconstructing globally correct sequences from shuffled steps. Using recipes as a canonical procedural task, we introduce the first comprehensive evaluation framework for step ordering, integrating three complementary metrics: Kendall's Tau, normalized longest common subsequence (NLCS), and normalized edit distance (NED). We systematically benchmark state-of-the-art LLMs under zero-shot and few-shot settings. Results reveal a pronounced performance degradation with increasing sequence length and shuffle intensity, exposing inherent limitations in modeling long-range sequential dependencies for structured procedural reasoning. Our work establishes the first multi-granular, interpretable evaluation paradigm for ordered-step reasoning, providing both a rigorous benchmark and actionable insights for advancing LLMs' structured reasoning capabilities.
📝 Abstract
Reasoning over procedural sequences, where the order of steps directly impacts outcomes, is a critical capability for large language models (LLMs). In this work, we study the task of reconstructing globally ordered sequences from shuffled procedural steps, using a curated dataset of food recipes, a domain where correct sequencing is essential for task success. We evaluate several LLMs under zero-shot and few-shot settings and present a comprehensive evaluation framework that adapts established metrics from ranking and sequence alignment. These include Kendall's Tau, Normalized Longest Common Subsequence (NLCS), and Normalized Edit Distance (NED), which capture complementary aspects of ordering quality. Our analysis shows that model performance declines with increasing sequence length, reflecting the added complexity of longer procedures. We also find that greater step displacement in the input, corresponding to more severe shuffling, leads to further degradation. These findings highlight the limitations of current LLMs in procedural reasoning, especially with longer and more disordered inputs.
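The three metrics can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the exact normalization used here (dividing by the longer sequence length) is an assumption, since the abstract does not spell out the precise definitions.

```python
from itertools import combinations

def kendall_tau(gold, pred):
    # Fraction of step pairs kept in the correct relative order,
    # scaled to [-1, 1]; assumes pred is a permutation of gold.
    pos = {step: i for i, step in enumerate(pred)}
    n = len(gold)
    concordant = discordant = 0
    for a, b in combinations(gold, 2):  # a precedes b in the gold order
        if pos[a] < pos[b]:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def nlcs(gold, pred):
    # Longest common subsequence length, normalized by the longer sequence.
    n, m = len(gold), len(pred)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if gold[i - 1] == pred[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / max(n, m)

def ned(gold, pred):
    # Levenshtein edit distance over steps, normalized by the longer sequence.
    n, m = len(gold), len(pred)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if gold[i - 1] == pred[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[n][m] / max(n, m)
```

For a five-step recipe with one adjacent swap, `gold = [1, 2, 3, 4, 5]` and `pred = [1, 3, 2, 4, 5]`, these give a Kendall's Tau of 0.8, an NLCS of 0.8, and an NED of 0.4, illustrating how the metrics penalize the same error at different granularities.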