🤖 AI Summary
This work addresses the limited generalization of large code models as tool-interacting agents, which stems from low-quality synthetic data, diminishing returns from quantity-driven data scaling, and underutilization of trajectory data. To overcome these challenges, the authors propose TDScaling, a framework that improves the performance–cost trade-off under a fixed training budget by increasing trajectory diversity rather than raw data volume. TDScaling introduces four key components: a Business Cluster mechanism that captures real-service logical dependencies; blueprint-driven multi-agent collaborative generation that enforces trajectory coherence; an adaptive evolution strategy guided by multidimensional metrics (Domain Entropy, Reasoning Mode Entropy, and Cumulative Action Complexity) that steers synthesis toward long-tail scenarios and prevents mode collapse; and a sandboxed code tool that mitigates catastrophic forgetting of intrinsic coding capabilities. Extensive experiments on general tool-use benchmarks (BFCL, tau²-Bench) and code agent tasks (RebenchT, CodeCI, BIRD) show that TDScaling improves both tool-use generalization and inherent coding proficiency, validating the efficacy of diversity-driven scaling.
📝 Abstract
As code large language models (LLMs) evolve into tool-interactive agents via the Model Context Protocol (MCP), their generalization is increasingly limited by low-quality synthetic data and the diminishing returns of quantity scaling. Moreover, quantity-centric scaling exhibits an early bottleneck that underutilizes trajectory data. We propose TDScaling, a Trajectory Diversity Scaling-based data synthesis framework for code agents that scales performance through diversity rather than raw volume. Under a fixed training budget, increasing trajectory diversity yields larger gains than adding more trajectories, improving the performance–cost trade-off for agent training. TDScaling integrates four innovations: (1) a Business Cluster mechanism that captures real-service logical dependencies; (2) a blueprint-driven multi-agent paradigm that enforces trajectory coherence; (3) an adaptive evolution mechanism that steers synthesis toward long-tail scenarios using Domain Entropy, Reasoning Mode Entropy, and Cumulative Action Complexity to prevent mode collapse; and (4) a sandboxed code tool that mitigates catastrophic forgetting of intrinsic coding capabilities. Experiments on general tool-use benchmarks (BFCL, tau²-Bench) and code agent tasks (RebenchT, CodeCI, BIRD) demonstrate a win-win outcome: TDScaling improves both tool-use generalization and inherent coding proficiency. We plan to release the full codebase and the synthesized dataset (including 30,000+ tool clusters) upon publication.
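The abstract does not give the formulas behind Domain Entropy or Reasoning Mode Entropy, but both are plausibly Shannon entropies over label distributions in the trajectory pool. Below is a minimal, hypothetical sketch of how such diversity metrics could be computed; the field names (`domain`, `mode`, `actions`) and the reading of Cumulative Action Complexity as a total tool-call count are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter
from math import log2

def shannon_entropy(labels):
    """Shannon entropy (in bits) of the empirical label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical trajectory pool: each trajectory tagged with a domain,
# a reasoning mode, and a number of tool-call actions (illustrative fields).
trajectories = [
    {"domain": "db",  "mode": "plan-then-act", "actions": 5},
    {"domain": "db",  "mode": "react",         "actions": 3},
    {"domain": "web", "mode": "react",         "actions": 7},
    {"domain": "ci",  "mode": "plan-then-act", "actions": 9},
]

# Diversity metrics over the pool; an adaptive evolution loop could
# preferentially synthesize trajectories that raise these values.
domain_entropy = shannon_entropy(t["domain"] for t in trajectories)
mode_entropy = shannon_entropy(t["mode"] for t in trajectories)
# One plausible reading of "Cumulative Action Complexity": total actions.
cumulative_action_complexity = sum(t["actions"] for t in trajectories)
```

Under this reading, steering synthesis toward under-represented domains and reasoning modes maximizes the entropy terms, which is one concrete way a pipeline could avoid mode collapse without adding raw volume.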