Few-shot LLM Synthetic Data with Distribution Matching

📅 2025-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) often generate synthetic data exhibiting distributional shifts from real data in critical linguistic attributes—such as style and tone—leading to distorted data distributions and degraded downstream performance when naively mixed. To address this, we propose SynAlign, the first framework to explicitly align the distributions of key linguistic attributes in few-shot synthetic data generation. SynAlign employs Gaussian processes to model uncertainty for principled high-quality demonstration selection, integrates implicit attribute inference with Maximum Mean Discrepancy (MMD) to enforce distributional alignment between synthetic and real data, and introduces a reweighting-based filtering mechanism to enhance synthetic data quality. Extensive experiments across multiple text prediction tasks demonstrate that SynAlign significantly improves the performance of small-scale models. Furthermore, online A/B testing confirms its robust effectiveness in improving retrieval system accuracy.

📝 Abstract
As large language models (LLMs) advance, their ability to perform in-context learning and few-shot language generation has improved significantly. This has spurred the use of LLMs to produce high-quality synthetic data to enhance the performance of smaller models such as online retrievers or weak LLMs. However, LLM-generated synthetic data often differs from real data in key linguistic attributes (e.g., style, tone, content proportions). As a result, mixing such synthetic data directly with real data may distort the original data distribution, potentially hindering performance improvements. To solve this, we introduce SynAlign: a synthetic data generation and filtering framework based on key attribute distribution matching. Before generation, SynAlign employs an uncertainty tracker surrogated by a Gaussian Process model to iteratively select data clusters distinct from previously selected ones as demonstrations for new data synthesis, facilitating efficient exploration of the diversity of the real data. Then, a latent attribute reasoning method is employed: the LLM summarizes the linguistic attributes of the demonstrations and then synthesizes new data based on them. This approach facilitates synthesizing diverse data with linguistic attributes that appear in the real data. After generation, the Maximum Mean Discrepancy (MMD) is used as the objective function to learn a sampling weight for each synthetic data point, ensuring distribution matching with the real data. Our experiments on multiple text prediction tasks show significant performance improvements. We also conducted an online A/B test on an online retriever to demonstrate SynAlign's effectiveness.
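The post-generation reweighting step described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes an RBF kernel and simple projected gradient descent on the probability simplex, where the paper's exact optimization procedure may differ. It learns a weight per synthetic sample that minimizes the squared MMD between the weighted synthetic set and the real set.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd_weights(X_syn, X_real, gamma=1.0, lr=0.1, steps=500):
    """Learn per-sample weights on synthetic data that minimize
    MMD^2 between the weighted synthetic set and the real set.

    MMD^2(w) = w^T K_ss w - 2 w^T k_sr + const, where k_sr is the
    mean kernel similarity of each synthetic point to the real set.
    """
    n = len(X_syn)
    K_ss = rbf_kernel(X_syn, X_syn, gamma)
    k_sr = rbf_kernel(X_syn, X_real, gamma).mean(axis=1)
    w = np.full(n, 1.0 / n)  # start from uniform weights
    for _ in range(steps):
        grad = 2 * (K_ss @ w) - 2 * k_sr  # gradient of MMD^2 in w
        w = w - lr * grad
        w = np.clip(w, 0.0, None)  # project back onto the simplex:
        w = w / w.sum()            # non-negative, summing to one
    return w
```

Synthetic points that lie in regions the real distribution covers receive higher weight, while off-distribution points are driven toward zero, which is the filtering effect the abstract describes.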
Problem

Research questions and friction points this paper is trying to address.

Improve synthetic data quality from LLMs
Match synthetic data distribution with real data
Enhance performance of smaller models using synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Process model
latent attribute reasoning
Maximum Mean Discrepancy
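The Gaussian Process contribution can be sketched as greedy demonstration selection driven by GP posterior variance: each new demonstration is the point the surrogate is most uncertain about given what has already been chosen. This is a hypothetical illustration, assuming a unit-variance RBF-kernel GP and an arbitrary starting point; the paper's actual uncertainty tracker may differ.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_demonstrations(X, k, gamma=1.0, noise=1e-6):
    """Greedily pick k demonstration indices: at each step choose the
    point with the highest GP posterior variance given the points
    already selected, so new demonstrations come from regions of the
    real data not yet covered."""
    selected = [0]  # assumption: start from an arbitrary first point
    for _ in range(k - 1):
        S = X[selected]
        K_SS = rbf(S, S, gamma) + noise * np.eye(len(S))
        K_xS = rbf(X, S, gamma)
        # Posterior variance of a unit-variance GP at every candidate:
        # var(x) = k(x, x) - k_xS K_SS^{-1} k_Sx
        var = 1.0 - np.einsum('ij,ij->i', K_xS @ np.linalg.inv(K_SS), K_xS)
        var[selected] = -np.inf  # never re-select a chosen point
        selected.append(int(np.argmax(var)))
    return selected
```

Points far (in kernel distance) from everything selected so far retain near-prior variance and are picked next, which mirrors the abstract's goal of iteratively selecting data clusters distinct from those already chosen.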