Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation

๐Ÿ“… 2026-02-11
๐Ÿค– AI Summary
Existing evaluation benchmarks for large language model (LLM) agents struggle to balance ecological validity with environmental controllability. To address this, we propose a benchmarking framework centered on enterprise API tasks, which standardizes sandboxed interactions with real-world services such as Slack, Box, Linear, and Google Calendar. Task success is determined not by exact parameter matching but by a state-diff contract mechanism that evaluates changes in the environmentโ€™s state before and after execution. This design enables fair, reproducible comparisons of multiple models under consistent conditions. We evaluate nine prominent LLMs across 224 enterprise workflow tasks and conduct ablation studies that highlight the critical impact of API documentation access on agent performance. All code and data are publicly released.

๐Ÿ“ Abstract
We present Agent-Diff, a novel benchmarking framework for evaluating agentic Large Language Models (LLMs) on real-world tasks that execute code via external APIs. Agentic LLM performance varies due to differences in models, external tool access, prompt structures, and agentic frameworks. Benchmarks must make fundamental trade-offs between a sandboxed approach that controls for variation in software environments and more ecologically valid approaches employing real services. Agent-Diff attempts to capture the desirable features of both approaches by including access to the real API interfaces for software services while sandboxing the environment in which calls are made, processed, and evaluated. This approach relies on two key innovations. The first is a novel state-diff contract, which separates process from outcome: rather than fuzzy trace or parameter matching, we define task success as whether the expected change in environment state was achieved. The second is a novel sandbox that provides a standardized scripting layer that all models use to execute code against external APIs (Slack, Box, Linear, Google Calendar). Thus, we can evaluate different agentic LLMs against a standardized set of contracts using a unified sandbox while still evaluating their performance on real-world service interfaces. Using the Agent-Diff framework, we provide benchmarks for nine LLMs across 224 tasks utilizing enterprise software workflows. In addition, we evaluate the robustness of the framework with ablation experiments assessing the contribution of API documentation access to benchmark performance. Code and data: https://github.com/agent-diff-bench/agent-diff.
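The state-diff contract described above can be illustrated with a minimal sketch. The function and field names below are hypothetical, for illustration only, and are not taken from the Agent-Diff codebase; the idea is simply to snapshot environment state before and after the agent runs, compute the difference, and check that difference against an expected contract instead of matching exact API call parameters.

```python
# Hypothetical sketch of state-diff-based task evaluation.
# Names (state_diff, satisfies_contract) are illustrative, not the paper's API.

def state_diff(before: dict, after: dict) -> dict:
    """Return the fields whose values were added, removed, or changed."""
    diff = {}
    for key in before.keys() | after.keys():
        if before.get(key) != after.get(key):
            diff[key] = {"before": before.get(key), "after": after.get(key)}
    return diff


def satisfies_contract(diff: dict, contract: dict) -> bool:
    """A task succeeds iff every change the contract requires appears in the diff
    with the expected post-execution value. Extra, unrelated changes are ignored."""
    return all(
        key in diff and diff[key]["after"] == expected
        for key, expected in contract.items()
    )


# Example: the agent was asked to create one Linear issue and post one Slack message.
before = {"linear_issues": 3, "slack_messages": 10}
after = {"linear_issues": 4, "slack_messages": 11}
contract = {"linear_issues": 4, "slack_messages": 11}

assert satisfies_contract(state_diff(before, after), contract)
```

Because success is defined on the resulting state rather than on the agent's exact calls, two agents that reach the same end state through different API call sequences are scored identically, which is the outcome-versus-process separation the abstract describes.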
Problem

Research questions and friction points this paper is trying to address.

LLM agents
API tasks
benchmarking
code execution
state evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

state-diff evaluation
LLM agent benchmarking
API sandbox
enterprise software automation
code execution framework
๐Ÿ‘ฅ Authors
Hubert M. Pysklo (Minerva University)
Artem Zhuravel (Minerva University)
Patrick D. Watson (Minerva University)

Memory
Neural Networks