Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline

📅 2025-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
As synthetic data becomes increasingly prevalent in AI training, critical responsibility risks—including representational bias, output controllability, and scalable validation—have intensified. This study conducts semi-structured interviews with 29 AI practitioners and responsible AI experts to empirically characterize the multi-stage integration of synthetic data across the AI lifecycle (e.g., training, evaluation)—the first such empirical investigation. Through thematic coding analysis, we identify three core challenges and propose the first engineering-oriented responsible usage roadmap. Our contributions are threefold: (1) revealing the pivotal yet constrained role of assistant generative models in synthetic data production; (2) distilling recurrent application scenarios, shared risks, and governance gaps; and (3) delivering the first actionable, practice-informed framework for policymakers and developers to operationalize responsible synthetic data use.

Technology Category

Application Category

📝 Abstract
Alongside the growth of generative AI, we are witnessing a surge in the use of synthetic data across all stages of the AI development pipeline. It is now common practice for researchers and practitioners to use one large generative model (which we refer to as an auxiliary model) to generate synthetic data that is used to train or evaluate another, reconfiguring AI workflows and reshaping the very nature of data. While scholars have raised concerns over the risks of synthetic data, policy guidance and best practices for its responsible use have not kept up with these rapidly evolving industry trends, in part because we lack a clear picture of current practices and challenges. Our work aims to address this gap. Through 29 interviews with AI practitioners and responsible AI experts, we examine the expanding role of synthetic data in AI development. Our findings reveal how auxiliary models are now widely used across the AI development pipeline. Practitioners describe synthetic data as crucial for addressing data scarcity and providing a competitive edge, noting that evaluation of generative AI systems at scale would be infeasible without auxiliary models. However, they face challenges controlling the outputs of auxiliary models, generating data that accurately depict underrepresented groups, and scaling data validation practices that are based primarily on manual inspection. We detail general limitations of and ethical considerations for synthetic data and conclude with a proposal of concrete steps towards the development of best practices for its responsible use.
Problem

Research questions and friction points this paper is trying to address.

Synthetic Data
Ethical Considerations
AI Model Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Data
AI Competitiveness
Ethical Considerations
🔎 Similar Papers
No similar papers found.
S
Shivani Kapania
Carnegie Mellon University, USA
S
Stephanie Ballard
Microsoft, USA
A
Alex Kessler
Microsoft, USA
Jennifer Wortman Vaughan
Jennifer Wortman Vaughan
Senior Principal Research Manager, Microsoft Research, New York City
AI TransparencyAI FairnessResponsible AIMachine LearningAlgorithmic Economics