AI Summary
This work addresses the limited fine-tuning performance of function-calling large language models (LLMs) when real user interaction data is scarce. We propose a learnable, router-based multimodal framework for synthetic data generation that jointly leverages text-to-text and vision-to-text LLMs, structured knowledge graphs, and domain-specific metadata. A dynamic routing mechanism orchestrates multiple heterogeneous generation pathways so that the synthetic data faithfully matches real-world distributions in semantic diversity and task complexity. Evaluated on natural-language-to-API mapping in digital content creation, the generated data significantly improves fine-tuned models' function classification accuracy and parameter selection precision, consistently outperforming existing synthetic data approaches and setting a new state of the art for function-calling tasks.
Abstract
This paper addresses fine-tuning Large Language Models (LLMs) for function-calling tasks when real user interaction data is unavailable. In digital content creation tools, users express their needs through natural language queries that must be mapped to API calls; the scarcity of task-specific real-world data, together with privacy constraints on training with it, necessitates synthetic data generation. Existing approaches to synthetic data generation fall short in diversity and complexity, failing to replicate real-world data distributions and leading to suboptimal performance after LLM fine-tuning. We present a novel router-based architecture that leverages domain resources such as content metadata and structured knowledge graphs, along with text-to-text and vision-to-text language models, to generate high-quality synthetic training data. The architecture's flexible routing mechanism enables synthetic data generation that matches observed real-world distributions, addressing a fundamental limitation of traditional approaches. Evaluation on a comprehensive set of real user queries demonstrates significant improvements in both function classification accuracy and API parameter selection. Models fine-tuned with our synthetic data consistently outperform those trained with traditional approaches, establishing new benchmarks for function-calling tasks.
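The routing idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the pathway names, the string-returning generator stubs, and the `route_batch` helper are all hypothetical stand-ins for the actual text-to-text, vision-to-text, and knowledge-graph generation components. The sketch only shows the core mechanism, namely sampling generation pathways according to a target distribution so the synthetic batch mirrors an observed real-world query mix.

```python
import random

# Hypothetical generator stubs. In the described architecture these would be
# calls into text-to-text / vision-to-text LLMs and knowledge-graph expansion,
# not simple string formatters.
def from_metadata(seed):
    return f"query derived from content metadata: {seed}"

def from_knowledge_graph(seed):
    return f"query derived from knowledge-graph walk: {seed}"

def from_vision_to_text(seed):
    return f"query derived from image description: {seed}"

PATHWAYS = {
    "metadata": from_metadata,
    "knowledge_graph": from_knowledge_graph,
    "vision_to_text": from_vision_to_text,
}

def route_batch(n, target_mix, rng=None):
    """Sample pathways according to target_mix (pathway name -> probability)
    so the synthetic batch approximates an observed query-type distribution."""
    rng = rng or random.Random(0)
    names = list(target_mix)
    weights = [target_mix[name] for name in names]
    batch = []
    for i in range(n):
        route = rng.choices(names, weights=weights, k=1)[0]
        batch.append(PATHWAYS[route](f"seed-{i}"))
    return batch

# Example: generate a small batch matching an assumed 50/30/20 query mix.
samples = route_batch(5, {"metadata": 0.5,
                          "knowledge_graph": 0.3,
                          "vision_to_text": 0.2})
```

In the paper's setting the routing weights are learnable rather than fixed, which is what lets the generated distribution adapt to the real-world one instead of being hand-tuned.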