🤖 AI Summary
Existing LLM post-training pipelines rely on manual design and optimize individual components—supervised fine-tuning (SFT), preference learning, and model merging—in isolation. Method: We propose the first fully automated, LLM-agent-driven framework for end-to-end construction and optimization of post-training pipelines. It autonomously explores combinatorial configurations of pipeline components within a joint search space spanning data selection, model architectures, and hyperparameters, guided by task-oriented evaluation feedback in a closed-loop optimization process. Contribution/Results: The framework discovers high-performing strategies overlooked by human designers, supports scalable analysis across data and model sizes, and achieves a +9.0-point improvement in tool-use accuracy while maintaining stable instruction-following capability. Experiments demonstrate significantly reduced human intervention, validating the feasibility of low-cost, high-efficiency automated pipeline tuning.
📝 Abstract
Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks. To further tailor LLMs to specific domains or applications, post-training techniques such as Supervised Fine-Tuning (SFT), Preference Learning, and model merging are commonly employed. While each of these methods has been extensively studied in isolation, the automated construction of complete post-training pipelines remains an underexplored area. Existing approaches typically rely on manual design or focus narrowly on optimizing individual components, such as data ordering or merging strategies. In this work, we introduce LaMDAgent (short for Language Model Developing Agent), a novel framework that autonomously constructs and optimizes full post-training pipelines through the use of LLM-based agents. LaMDAgent systematically explores diverse model generation techniques, datasets, and hyperparameter configurations, leveraging task-based feedback to discover high-performing pipelines with minimal human intervention. Our experiments show that LaMDAgent improves tool-use accuracy by 9.0 points while preserving instruction-following capabilities. Moreover, it uncovers effective post-training strategies that are often overlooked by conventional human-driven exploration. We further analyze the impact of scaling data and model sizes to reduce the computational cost of exploration, finding that scaling model size introduces new challenges, whereas scaling data size enables cost-effective pipeline discovery.
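The closed-loop process the abstract describes — propose a pipeline configuration, evaluate it on a task, and feed the score back into the next proposal — can be sketched in miniature. This is a hypothetical illustration only: the action names, datasets, and scoring function below are invented stand-ins, not LaMDAgent's actual API or evaluation setup (which uses an LLM agent rather than exhaustive enumeration).

```python
import itertools

# Illustrative component choices: post-training actions and datasets.
# (Names are hypothetical stand-ins for the paper's search space.)
ACTIONS = ["sft", "preference_learning", "model_merge"]
DATASETS = ["tool_use_data", "instruct_data"]

def evaluate(pipeline):
    """Stand-in for task-based evaluation (e.g., tool-use accuracy).

    Real pipelines would train/merge a model and measure benchmark
    scores; here we just assign toy weights to each (action, dataset).
    """
    action_value = {"sft": 0.4, "preference_learning": 0.3, "model_merge": 0.2}
    score = 0.0
    for action, dataset in pipeline:
        score += action_value[action]
        score += 0.1 if dataset == "tool_use_data" else 0.05
    return score / max(len(pipeline), 1)

def search(num_steps=2):
    """Score every short pipeline and return the highest-scoring one.

    An agent-driven search would instead propose candidates iteratively,
    conditioned on feedback from earlier evaluations.
    """
    step_choices = list(itertools.product(ACTIONS, DATASETS))
    candidates = itertools.product(step_choices, repeat=num_steps)
    return max(candidates, key=evaluate)

best = search()
print(best, round(evaluate(best), 3))
```

Even this toy version shows the key ingredient: pipeline quality is judged only through task feedback, so the search can surface step orderings a human designer might not try.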