SOP-Bench: Complex Industrial SOPs for Evaluating LLM Agents

📅 2025-06-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Problem: Publicly available benchmarks for LLM evaluation do not capture the structural, constraint-driven, and domain-specific nature of real-world Standard Operating Procedures (SOPs), hindering rigorous assessment of LLMs' planning and execution capabilities in long-horizon, multi-tool, rule-intensive tasks. Method: We introduce SOP-Bench, the first benchmark explicitly designed for industrial SOPs. Its synthetic SOP generation framework yields an open-source dataset of 1,800+ tasks across 10 industrial domains, each with API/tool interface specifications and human-validated test cases, on which we systematically evaluate Function-Calling and ReAct agents. Contribution/Results: Success rates are critically low, averaging 27% for Function-Calling agents and 48% for ReAct agents; when the tool registry is far larger than a task requires, agents invoke incorrect tools nearly 100% of the time; and performance varies significantly across domains. These findings underscore the need for domain-adaptive fine-tuning and architectural improvements to support robust SOP-constrained reasoning.

📝 Abstract
Large Language Models (LLMs) demonstrate impressive general-purpose reasoning and problem-solving abilities. However, they struggle with executing complex, long-horizon workflows that demand strict adherence to Standard Operating Procedures (SOPs), a critical requirement for real-world industrial automation. Despite this need, there is a lack of public benchmarks that reflect the complexity, structure, and domain-specific nuances of SOPs. To address this, we present three main contributions. First, we introduce a synthetic data generation framework to create realistic, industry-grade SOPs that rigorously test the planning, reasoning, and tool-use capabilities of LLM-based agents. Second, using this framework, we develop SOP-Bench, a benchmark of over 1,800 tasks across 10 industrial domains, each with APIs, tool interfaces, and human-validated test cases. Third, we evaluate two prominent agent architectures: Function-Calling and ReAct Agents, on SOP-Bench, observing average success rates of only 27% and 48%, respectively. Remarkably, when the tool registry is much larger than necessary, agents invoke incorrect tools nearly 100% of the time. These findings underscore a substantial gap between current agentic capabilities of LLMs and the demands of automating real-world SOPs. Performance varies significantly by task and domain, highlighting the need for domain-specific benchmarking and architectural choices before deployment. SOP-Bench is publicly available at http://sop-bench.s3-website-us-west-2.amazonaws.com/. We also release the prompts underpinning the data generation framework to support new domain-specific SOP benchmarks. We invite the community to extend SOP-Bench with SOPs from their industrial domains.
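The abstract describes each benchmark task as bundling an SOP, tool/API interfaces, and human-validated test cases. The snippet below is a minimal sketch, assuming one plausible shape for such a task record; the class and field names (ToolSpec, SOPTask, distractor_tools, expected_calls, and so on) are illustrative assumptions, not the released SOP-Bench schema.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ToolSpec:
    """Interface description of one tool/API exposed to the agent."""
    name: str
    description: str
    parameters: dict                              # JSON-schema-style argument spec
    handler: Optional[Callable[..., dict]] = None # simulated backend, if any

@dataclass
class SOPTask:
    """One hypothetical benchmark instance: SOP text, tools, and a checkable outcome."""
    domain: str                                   # e.g. "logistics" or "manufacturing QC"
    sop_text: str                                 # step-by-step procedure the agent must follow
    tools: list = field(default_factory=list)             # tools the SOP actually needs
    distractor_tools: list = field(default_factory=list)  # extra tools that inflate the registry
    expected_calls: list = field(default_factory=list)    # gold tool-call sequence
    test_case: dict = field(default_factory=dict)          # human-validated input/output check

# Toy instance with one distractor tool in the registry.
task = SOPTask(
    domain="logistics",
    sop_text="1) Check inventory for the item. 2) If stock is low, create a replenishment work order.",
    tools=[
        ToolSpec("check_inventory", "Look up stock level for an item", {"item_id": "string"}),
        ToolSpec("create_work_order", "Open a replenishment work order", {"item_id": "string", "qty": "integer"}),
    ],
    distractor_tools=[ToolSpec("send_email", "Send a notification email", {"to": "string", "body": "string"})],
    expected_calls=["check_inventory", "create_work_order"],
    test_case={"item_id": "A-17", "expected_status": "work_order_created"},
)
```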
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with complex industrial SOP workflows
Lack of public benchmarks for SOP complexity
Agents perform poorly on domain-specific SOP tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data generation framework for realistic, industry-grade SOPs
SOP-Bench: 1,800+ tasks across 10 industrial domains with APIs, tool interfaces, and human-validated test cases
Evaluation of Function-Calling and ReAct agents (see the scoring sketch below)
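One reported finding is that agents invoke incorrect tools nearly 100% of the time when the tool registry is much larger than a task needs. The following is a hedged sketch of how such a trace-level check could be scored; score_episode, its metric names, and the toy trace are hypothetical and not the paper's actual evaluation harness.

```python
def score_episode(called: list, expected: list, registry: set) -> dict:
    """Compare an agent's tool-call trace against the gold sequence for one task."""
    wrong = [c for c in called if c not in expected]       # calls outside the gold set
    return {
        "success": called == expected,                     # strict: right tools, right order
        "incorrect_tool_rate": len(wrong) / max(len(called), 1),
        "registry_size": len(registry),                    # more distractors, more chances to misfire
    }

# Toy trace: the agent detours to a distractor tool ("send_email") mid-procedure.
print(score_episode(
    called=["check_inventory", "send_email", "create_work_order"],
    expected=["check_inventory", "create_work_order"],
    registry={"check_inventory", "create_work_order", "send_email", "export_report"},
))
# -> success False, incorrect_tool_rate ~0.33, registry_size 4
```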
🔎 Similar Papers
No similar papers found.
👥 Authors
Subhrangshu Nandi
Applied AI, Amazon
Arghya Datta
Applied AI, Amazon
Nikhil Vichare
Applied AI, Amazon
Indranil Bhattacharya
Applied AI, Amazon
Huzefa Raja
Applied AI, Amazon
Jing Xu
Applied AI, Amazon
Shayan Ray
Applied AI, Amazon
Giuseppe Carenini
Professor of Computer Science, University of British Columbia
Artificial Intelligence, Natural Language Processing, Intelligent User Interfaces, Information Visualization
Abhi Srivastava
Applied AI, Amazon
Aaron Chan
Sahara AI
Machine Learning, Large Language Models, AI Agents, Decentralized AI
Man Ho Woo
Applied AI, Amazon
Amar Kandola
Applied AI, Amazon
Brandon Theresa
Applied AI, Amazon
Francesco Carbone
Applied AI, Amazon