🤖 AI Summary
To address the insufficient diversity of synthetic data, which hinders effective domain adaptation, this paper proposes a meta-prompt-driven multi-agent collaborative synthesis framework. A master model orchestrates multiple "expert" large language model (LLM) agents to generate high-diversity, purely synthetic domain-specific data via meta-prompt engineering and agent orchestration. Crucially, the method requires no mixing with real data and achieves effective domain adaptation using only 25 million tokens. In the Finance and Biomedicine domains, continual pretraining of Mistral-7B-v0.3 yields improvements of up to 4.08% and 13.75%, respectively, while the diversity of the synthetic data approaches that of LLM pretraining corpora. The work integrates meta-prompting with multi-LLM agent collaboration for synthetic data generation, improving domain adaptation efficiency while preserving general-task capability.
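To make the orchestration idea concrete, below is a minimal sketch of one meta-prompt-driven generation round. It assumes a generic chat-completion backend; the `call_llm` helper, the master/expert role names, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of meta-prompt-driven multi-agent synthesis.
# The call_llm helper, role names, and prompts are illustrative assumptions,
# not the paper's implementation.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM backend."""
    raise NotImplementedError("plug in your LLM client here")

def metasynth_round(domain: str, seed_topic: str, num_experts: int = 3) -> str:
    # 1. The master model writes a meta-prompt: persona + sub-task per expert.
    meta_prompt = call_llm(
        role="master",
        prompt=(
            f"Design {num_experts} distinct expert personas and sub-tasks for "
            f"generating a diverse {domain} document about '{seed_topic}'. "
            "Return one instruction per expert."
        ),
    )

    # 2. Each expert agent drafts content conditioned on its instruction.
    drafts = [
        call_llm(
            role=f"expert_{i}",
            prompt=f"{meta_prompt}\n\nYou are expert {i}. Write your section.",
        )
        for i in range(num_experts)
    ]

    # 3. The master model merges the expert drafts into one synthetic document.
    return call_llm(
        role="master",
        prompt="Combine these expert drafts into one coherent document:\n\n"
        + "\n---\n".join(drafts),
    )
```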
📝 Abstract
Recent smaller language models such as Phi-3.5 and Phi-4 rely on synthetic data generated using larger language models. Questions remain about leveraging synthetic data for other use cases, such as adapting LLMs to specific domains. A key limitation of synthetic data is low diversity, which negatively impacts its downstream applicability for improving other models. To address this, we propose MetaSynth, a method for generating synthetic data that enhances diversity through meta-prompting, where a language model orchestrates multiple "expert" LLM agents to collaboratively generate data. Using only 25 million tokens of synthetic data generated with MetaSynth, we successfully adapt a well-trained LLM (Mistral-7B-v0.3) to two specialized domains, Finance and Biomedicine, without compromising the capabilities of the resulting model in general tasks. In addition, we evaluate the diversity of our synthetic data using seven automated metrics, and find that it approaches the diversity of LLM pre-training corpora. Continually pre-training Mistral-7B-v0.3 with MetaSynth notably outperforms the base LLM, showing improvements of up to 4.08% in Finance and 13.75% in Biomedicine. The same model shows degraded performance when trained on data generated using a template prompt, even when the template includes prior generations and varying in-context exemplars of real data. Our findings suggest that a few million tokens of diverse synthetic data, without mixing in any real data, are sufficient for effective domain adaptation when using MetaSynth.
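As an illustration of what automated diversity scoring can look like, the snippet below computes distinct-n, a common lexical diversity measure (the ratio of unique to total n-grams). It is only an example under that assumption; the abstract does not specify the seven metrics used, and distinct-n may or may not be among them.

```python
# Illustrative lexical-diversity metric (distinct-n): ratio of unique n-grams
# to total n-grams across a corpus. Not necessarily one of the paper's metrics.

from collections import Counter

def distinct_n(texts: list[str], n: int = 2) -> float:
    ngrams = Counter()
    total = 0
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i : i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

# Toy comparison: varied sentences score higher than repeated templated output.
synthetic = ["interest rates affect bond yields", "bond yields respond to rate changes"]
templated = ["interest rates affect bond yields", "interest rates affect bond yields"]
print(distinct_n(synthetic), distinct_n(templated))  # higher value = more diverse
```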