Enhancing Requirement Traceability through Data Augmentation Using Large Language Models

πŸ“… 2025-09-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Automated requirement-to-code traceability suffers from limited training data and semantic gaps between requirements and code, severely constraining model performance. Method: This paper proposes a large language model (LLM)-based data augmentation framework. It introduces four novel prompt templates to generate high-quality trace links in zero-shot and few-shot settings using Gemini 1.5 Pro, Claude 3, GPT-3.5, and GPT-4. Additionally, the framework optimizes the traceability model’s encoder architecture to better accommodate LLM-augmented data. Contribution/Results: Extensive experiments demonstrate substantial improvements in traceability performance: the F1-score increases by up to 28.59% over baseline methods. These results validate the effectiveness and practicality of LLM-driven data augmentation for low-resource software traceability tasks, offering a scalable solution to data scarcity and cross-domain semantic misalignment.

πŸ“ Abstract
Requirements traceability is crucial in software engineering to ensure consistency between requirements and code. However, existing automated traceability methods are constrained by the scarcity of training data and challenges in bridging the semantic gap between artifacts. This study aims to address the data scarcity problem in requirements traceability by employing large language models (LLMs) for data augmentation. We propose a novel approach that utilizes prompt-based techniques with LLMs to generate augmented requirement-to-code trace links, thereby enhancing the training dataset. Four LLMs (Gemini 1.5 Pro, Claude 3, GPT-3.5, and GPT-4) were used, employing both zero-shot and few-shot templates. Moreover, we optimized the encoder component of the tracing model to improve its efficiency and adaptability to augmented data. The key contributions of this paper are: (1) proposing and evaluating four prompt templates for data augmentation; (2) providing a comparative analysis of four LLMs for generating trace links; (3) enhancing the model's encoder for improved adaptability to augmented datasets. Experimental results show that our approach significantly enhances model performance, achieving an F1 score improvement of up to 28.59%, thus demonstrating its effectiveness and potential for practical application.
Problem

Research questions and friction points this paper is trying to address.

Addressing data scarcity in requirements traceability using LLMs
Bridging semantic gap between requirements and code artifacts
Improving automated traceability model performance through data augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate augmented requirement-to-code trace links
Prompt-based techniques with zero-shot and few-shot templates
Optimized encoder improves adaptability to augmented data
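The prompt-based augmentation idea above can be sketched as follows. This is a hypothetical illustration, not the paper's actual four templates: the template wording, function names, and label vocabulary (`linked` / `not linked`) are assumptions for demonstration.

```python
# Illustrative sketch of zero-shot and few-shot prompt construction for
# requirement-to-code trace-link generation. The template text below is
# hypothetical, not one of the paper's four templates.

ZERO_SHOT_TEMPLATE = """You are a software traceability assistant.
Requirement:
{requirement}

Code artifact:
{code}

Does the code implement the requirement? Answer "linked" or "not linked"."""


def build_zero_shot_prompt(requirement: str, code: str) -> str:
    """Fill the zero-shot template with one requirement/code pair."""
    return ZERO_SHOT_TEMPLATE.format(requirement=requirement.strip(), code=code.strip())


def build_few_shot_prompt(examples, requirement: str, code: str) -> str:
    """Prepend labeled (requirement, code, label) demonstrations for the
    few-shot setting, then append the unlabeled query pair."""
    shots = "\n\n".join(
        build_zero_shot_prompt(r, c) + f"\nAnswer: {label}"
        for r, c, label in examples
    )
    return shots + "\n\n" + build_zero_shot_prompt(requirement, code)
```

In a pipeline like the one described, such prompts would be sent to the LLMs (Gemini 1.5 Pro, Claude 3, GPT-3.5, GPT-4) and the labeled pairs added to the training set of the traceability model.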
Jianzhang Zhang
Department of Management Science and Engineering, Hangzhou Normal University, Hangzhou, Zhejiang, P.R. China
Jialong Zhou
Department of Management Science and Engineering, Hangzhou Normal University, Hangzhou, Zhejiang, P.R. China
Nan Niu
University of North Florida
Software Engineering · Requirements Engineering · Multimedia Computing · Human-Centered Computing
Chuang Liu
Department of Management Science and Engineering, Hangzhou Normal University, Hangzhou, Zhejiang, P.R. China