How Good Are Synthetic Requirements? Evaluating LLM-Generated Datasets for AI4RE

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-quality annotated data is scarce in requirements engineering (RE), hindering progress in AI for RE (AI4RE). To address this, the paper presents Synthline v1, a systematic framework for generating synthetic requirements data that combines few-shot prompting, PACE (Prompt Actor-Critic Editing) for automated prompt optimization, and semantic-similarity-based post-hoc filtering. The authors evaluate these techniques across four RE classification tasks: defect, functional, quality, and security requirements. Multi-sample prompting improves F1 scores by 6 to 44 percentage points over single-sample generation, and classifiers trained on synthetic data outperform those trained on human-annotated data by 15.4 and 7.8 percentage points on defect and security classification, respectively. PACE helps substantially only on functional classification, and similarity-based curation increases diversity but often reduces classification performance. Overall, Synthline v1 offers a reproducible and controllable approach to synthetic data generation for AI4RE.

📝 Abstract
The shortage of publicly available, labeled requirements datasets remains a major barrier to advancing Artificial Intelligence for Requirements Engineering (AI4RE). While Large Language Models offer promising capabilities for synthetic data generation, systematic approaches to control and optimize the quality of generated requirements remain underexplored. This paper presents Synthline v1, an enhanced Product Line approach for generating synthetic requirements data that extends our earlier v0 version with advanced generation strategies and curation techniques. We investigate four research questions assessing how prompting strategies, automated prompt optimization, and post-generation curation affect data quality across four classification tasks: defect detection, functional vs. non-functional, quality vs. non-quality, and security vs. non-security. Our evaluation shows that multi-sample prompting significantly boosts both utility and diversity over single-sample generation, with F1-score gains from 6 to 44 points. The use of PACE (Prompt Actor-Critic Editing) for automated prompt optimization yields task-dependent results, greatly improving functional classification (+32.5 points) but reducing performance on others. Interestingly, similarity-based curation improves diversity but often harms classification performance, indicating that some redundancy may help ML models. Most importantly, our results show that synthetic requirements can match or outperform human-authored ones for specific tasks, with synthetic data surpassing human data for security (+7.8 points) and defect classification (+15.4 points). These findings offer practical insights for AI4RE and chart a viable path to mitigating dataset scarcity through systematic synthetic generation.
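The similarity-based curation the abstract describes can be illustrated with a greedy filter that drops any generated requirement whose embedding is too close to one already kept. This is a minimal sketch, not the paper's actual pipeline: the `embeddings` input, the 0.9 threshold, and the function names are assumptions for illustration (in practice the vectors would come from a sentence-embedding model).

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_filter(embeddings, threshold=0.9):
    """Greedily keep items whose embedding stays below `threshold`
    cosine similarity to every item already kept."""
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy example: the second vector nearly duplicates the first, so it is dropped.
vecs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(similarity_filter(vecs, threshold=0.9))  # -> [0, 2]
```

Note how this connects to the paper's finding: an aggressive threshold raises diversity but discards near-duplicates that may still carry useful training signal, which is why curation sometimes hurt classification performance.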
Problem

Research questions and friction points this paper is trying to address.

Evaluating the quality of LLM-generated synthetic requirements datasets
Optimizing synthetic data generation for AI4RE classification tasks
Comparing the performance of synthetic versus human-authored requirements data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthline v1 extends the Product Line approach with advanced generation and curation techniques
Multi-sample prompting boosts both utility and diversity over single-sample generation
PACE automated prompt optimization yields task-dependent results
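The multi-sample prompting idea above can be sketched as a prompt builder that requests several labeled requirements per LLM call, plus a parser for the reply. This is a hypothetical illustration, not Synthline's actual prompt templates: the task/label wording, the bullet format, and both function names are assumptions.

```python
def build_multi_sample_prompt(task, label, n, examples):
    """Request n requirements in one call instead of one per call,
    which the paper reports improves both utility and diversity."""
    shots = "\n".join(f"- {e}" for e in examples)
    return (
        f"You are generating training data for {task}.\n"
        f"Example requirements labeled '{label}':\n{shots}\n"
        f"Write {n} new, diverse requirements with the same label, "
        f"one per line, each starting with '- '."
    )

def parse_samples(response):
    # Extract generated requirements from a "- " bulleted reply.
    return [line[2:].strip() for line in response.splitlines()
            if line.startswith("- ")]

prompt = build_multi_sample_prompt(
    "security requirement classification", "security", 5,
    ["The system shall lock accounts after five failed logins."])
reply = ("- The system shall encrypt data at rest.\n"
         "- User sessions shall expire after 15 minutes of inactivity.")
print(parse_samples(reply))
```

Generating many samples in one completion lets later items condition on earlier ones, which plausibly explains the diversity gain over independent single-sample calls.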