🤖 AI Summary
Supervised fine-tuning (SFT) of AI agents currently suffers from heterogeneous, fragmented training data that varies in origin, format, and schema, leading to high integration costs and hindering standardized, scalable training. To address this, we propose the Agent Data Protocol (ADP), a lightweight, general-purpose structured representation language. ADP unifies 13 heterogeneous agent datasets, spanning API invocation, code generation, and web interaction, into a single, semantically consistent intermediate format, eliminating per-dataset engineering. A modular parsing and conversion toolchain enables seamless export of training-ready data for mainstream agent frameworks. Large-scale SFT evaluation demonstrates that ADP improves baseline model performance by roughly 20% on average across standard benchmarks, achieving state-of-the-art (SOTA) or near-SOTA results without domain-specific tuning. All code and data are publicly released.
📝 Abstract
Public research results on large-scale supervised fine-tuning of AI agents remain relatively rare, since the collection of agent training data presents unique challenges. In this work, we argue that the bottleneck is not a lack of underlying data sources, but that a large variety of data is fragmented across heterogeneous formats, tools, and interfaces. To address this, we introduce the agent data protocol (ADP), a lightweight representation language that serves as an "interlingua" between agent datasets in diverse formats and unified agent training pipelines downstream. The design of ADP is expressive enough to capture a large variety of tasks, including API/tool use, browsing, coding, software engineering, and general agentic workflows, while remaining simple to parse and train on without engineering at a per-dataset level. In experiments, we unified a broad collection of 13 existing agent training datasets into ADP format, and converted the standardized ADP data into training-ready formats for multiple agent frameworks. We performed SFT on these data and demonstrated an average performance gain of roughly 20% over the corresponding base models, delivering state-of-the-art or near-SOTA performance on standard coding, browsing, tool use, and research benchmarks without domain-specific tuning. All code and data are released publicly, in the hope that ADP can help lower the barrier to standardized, scalable, and reproducible agent training.
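To make the "interlingua" idea concrete, here is a minimal sketch of what such a pipeline could look like: a heterogeneous source record is parsed into a unified intermediate trajectory, which is then exported to a chat-style SFT format. This is not the actual ADP schema; all field names (`Step`, `Trajectory`, `instruction`, `calls`, etc.) are hypothetical, chosen only to illustrate the parse-then-export pattern the abstract describes.

```python
# Hypothetical sketch of an interlingua-style data pipeline.
# None of these class or field names come from the actual ADP spec.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Step:
    """One turn in a unified agent trajectory."""
    role: str                       # "user", "assistant", or "environment"
    kind: str                       # e.g. "message", "tool_call", "observation"
    content: str
    metadata: dict[str, Any] = field(default_factory=dict)

@dataclass
class Trajectory:
    """Dataset-agnostic intermediate representation of one episode."""
    task_id: str
    source_dataset: str
    steps: list[Step]

def from_tool_log(record: dict) -> Trajectory:
    """Parse a hypothetical API-invocation record into the unified format."""
    steps = [Step("user", "message", record["instruction"])]
    for call in record["calls"]:
        steps.append(Step("assistant", "tool_call", call["invocation"]))
        steps.append(Step("environment", "observation", call["result"]))
    return Trajectory(record["id"], "tool-log", steps)

def to_chat_sft(traj: Trajectory) -> list[dict]:
    """Export the unified trajectory to a generic chat-style SFT format."""
    role_map = {"user": "user", "assistant": "assistant", "environment": "tool"}
    return [{"role": role_map[s.role], "content": s.content} for s in traj.steps]

# Illustrative source record (invented data):
example = {
    "id": "t1",
    "instruction": "What is the weather in Paris?",
    "calls": [{"invocation": "get_weather(city='Paris')",
               "result": "18C, cloudy"}],
}
messages = to_chat_sft(from_tool_log(example))
```

A second dataset in a different format would get its own `from_*` parser but share the same exporter, which is the point of the hub-and-spoke design: N parsers plus M exporters, rather than N×M bespoke conversions.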