Linguistic and Argument Diversity in Synthetic Data for Function-Calling Agents

📅 2026-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of high-quality, diverse synthetic data for training function-calling agents, particularly in terms of linguistic variation and parameter coverage. It proposes a general-purpose diversity optimization method that systematically incorporates both language diversity and parameter coverage into synthetic data generation, without relying on handcrafted rules or predefined taxonomies. By jointly optimizing intrinsic diversity metrics and extrinsic task performance, the approach generates data that maintains high correctness while significantly enhancing diversity. Experimental results demonstrate that models trained on this data achieve a 7.4% improvement in out-of-distribution accuracy on the BFCL benchmark, confirming the effectiveness and generalization capability of the proposed method.
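The summary describes jointly optimizing intrinsic diversity metrics over queries and arguments, but does not name the metrics. As a hedged illustration only, the sketch below shows two plausible proxies: distinct-n for linguistic diversity of requests, and distinct-value coverage for a single function argument such as `city_name`. All function names and the toy data are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical proxies for the two diversity axes the paper targets:
# (1) linguistic diversity of user queries, (2) argument-value coverage.

def distinct_n(texts, n=2):
    """Fraction of unique n-grams across all texts (higher = more diverse)."""
    ngrams = []
    for t in texts:
        toks = t.lower().split()
        ngrams.extend(zip(*(toks[i:] for i in range(n))))
    return len(set(ngrams)) / max(len(ngrams), 1)

def argument_coverage(calls, param):
    """Number of distinct values supplied for one parameter across calls."""
    return len({c["args"][param] for c in calls if param in c["args"]})

# Toy synthetic examples (invented for illustration).
queries = [
    "What's the weather in Paris today?",
    "Tell me the current weather for Tokyo",
    "Is it raining in Paris?",
]
calls = [
    {"name": "get_weather", "args": {"city_name": "Paris"}},
    {"name": "get_weather", "args": {"city_name": "Tokyo"}},
    {"name": "get_weather", "args": {"city_name": "Paris"}},
]

print(round(distinct_n(queries, n=2), 3))
print(argument_coverage(calls, "city_name"))  # 2 distinct cities
```

A generation pipeline in this spirit would score candidate synthetic examples with metrics like these and keep candidates that raise the scores while a separate check preserves correctness.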

📝 Abstract
The construction of function-calling agents has emerged as a promising avenue for extending model capabilities. A major challenge for this task is obtaining high-quality, diverse training data. Prior work emphasizes diversity in functions, invocation patterns, and interaction turns, yet the linguistic diversity of requests and the coverage of arguments (e.g., `city_name`, `stock_ticker`) remain underexplored. We propose a method that generates synthetic datasets by optimizing general-purpose diversity metrics across both queries and arguments, without relying on hand-crafted rules or taxonomies, making it robust across use cases. We demonstrate the effectiveness of our technique through both intrinsic and extrinsic evaluation, comparing it to state-of-the-art data generation methods. We show superiority over baselines in terms of diversity while maintaining comparable correctness. Additionally, when our dataset is used for training, the resulting model exhibits superior out-of-distribution performance compared to analogous models trained on data from the baseline generation methods. In particular, we achieve a 7.4% increase in accuracy on the BFCL benchmark compared to similar counterparts.
Problem

Research questions and friction points this paper is trying to address.

linguistic diversity
argument coverage
synthetic data
function-calling agents
training data diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

synthetic data generation
linguistic diversity
argument coverage
function-calling agents
out-of-distribution generalization
Dan Greenstein
Technion, Haifa, Israel
Zohar S. Karnin
TII, Haifa, Israel
Chen Amiraz
TII, Haifa, Israel
Oren Somekh
Technology Innovation Institute
Recommendation Systems · Online Advertising · Machine Learning · LLM RAG