AI Summary
Automated requirement-to-code traceability suffers from limited training data and semantic gaps between requirements and code, severely constraining model performance. Method: This paper proposes a large language model (LLM)-based data augmentation framework. It introduces four novel prompt templates to generate high-quality trace links in zero-shot and few-shot settings using Gemini 1.5 Pro, Claude 3, GPT-3.5, and GPT-4. Additionally, the framework optimizes the traceability model's encoder architecture to better accommodate LLM-augmented data. Contribution/Results: Extensive experiments demonstrate substantial improvements in traceability performance: the F1-score increases by up to 28.59% over baseline methods. These results validate the effectiveness and practicality of LLM-driven data augmentation for low-resource software traceability tasks, offering a scalable solution to data scarcity and cross-domain semantic misalignment.
Abstract
Requirements traceability is crucial in software engineering to ensure consistency between requirements and code. However, existing automated traceability methods are constrained by the scarcity of training data and challenges in bridging the semantic gap between artifacts. This study aims to address the data scarcity problem in requirements traceability by employing large language models (LLMs) for data augmentation. We propose a novel approach that utilizes prompt-based techniques with LLMs to generate augmented requirement-to-code trace links, thereby enhancing the training dataset. Four LLMs (Gemini 1.5 Pro, Claude 3, GPT-3.5, and GPT-4) were used, employing both zero-shot and few-shot templates. Moreover, we optimized the encoder component of the tracing model to improve its efficiency and adaptability to augmented data. The key contributions of this paper are: (1) proposing and evaluating four prompt templates for data augmentation; (2) providing a comparative analysis of four LLMs for generating trace links; (3) enhancing the model's encoder for improved adaptability to augmented datasets. Experimental results show that our approach significantly enhances model performance, achieving an F1 score improvement of up to 28.59%, thus demonstrating its effectiveness and potential for practical application.
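The paper's four prompt templates are not reproduced here, but the core idea of prompt-based trace-link generation can be sketched briefly. The following is a minimal, hypothetical illustration (the function names, prompt wording, and LINKED/UNLINKED label scheme are this sketch's assumptions, not the paper's actual templates): a zero-shot prompt asks an LLM to judge whether a code snippet implements a requirement, and the one-word answer is parsed into a binary trace-link label that can augment the training set.

```python
# Hypothetical sketch of zero-shot trace-link generation via prompting.
# The prompt text and helper names below are illustrative assumptions,
# not the paper's actual templates.

def build_zero_shot_prompt(requirement: str, code: str) -> str:
    """Assemble a zero-shot prompt asking an LLM to judge a trace link."""
    return (
        "You are a software traceability assistant.\n"
        "Decide whether the code below implements the requirement.\n"
        f"Requirement: {requirement}\n"
        f"Code:\n{code}\n"
        "Answer with exactly one word: LINKED or UNLINKED."
    )

def parse_link_label(llm_response: str) -> bool:
    """Map the model's one-word answer to a boolean trace-link label."""
    return llm_response.strip().upper().startswith("LINKED")

# Example: build a prompt for one requirement-code pair. Sending it to an
# LLM (Gemini 1.5 Pro, Claude 3, GPT-3.5, or GPT-4) is omitted here.
prompt = build_zero_shot_prompt(
    "The system shall hash passwords before storing them.",
    "def store(user, pw): db.save(user, sha256(pw))",
)
label = parse_link_label("LINKED")  # True: pair joins the augmented set
```

A few-shot variant of the same template would simply prepend a handful of labeled requirement-code examples before the query pair, which is the main difference between the zero-shot and few-shot settings the paper compares.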