Requirements-Driven Automated Software Testing: A Systematic Review

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated software testing suffers from insufficient alignment between test generation and requirements. This paper presents the first systematic literature review (SLR) on Requirements-Driven Automated Software Testing (REDAST), synthesizing 156 studies retrieved from ACM Digital Library, IEEE Xplore, and other major repositories. Applying thematic coding and cross-dimensional analysis, the authors construct the first comprehensive REDAST framework, clarifying requirements representation formats, abstraction-level mapping patterns, and critical gaps in evaluation methodologies. They categorize requirements formalisms—including natural language, UML, and SysML—and map them to underlying technical approaches such as model checking, NLP-based test generation, and constraint solving. Seven recurring bottlenecks are identified (e.g., poor scalability, high sensitivity to input quality), and four industrially viable evolutionary directions are proposed. The study establishes a structured benchmark and roadmap to advance both the theoretical foundations and practical adoption of REDAST.

📝 Abstract
Automated software testing has the potential to enhance efficiency and reliability in software development, yet its adoption remains hindered by challenges in aligning test generation with software requirements. REquirements-Driven Automated Software Testing (REDAST) aims to bridge this gap by leveraging requirements as the foundation for automated test artifact generation. This systematic literature review (SLR) explores the landscape of REDAST by analyzing requirements input, transformation techniques, test outcomes, evaluation methods, and existing limitations. We conducted a comprehensive review of 156 papers selected from six major research databases. Our findings reveal the predominant types, formats, and notations used for requirements in REDAST, the automation techniques employed for generating test artifacts from requirements, and the abstraction levels of resulting test cases. Furthermore, we evaluate the effectiveness of various testing frameworks and identify key challenges such as scalability, automation gaps, and dependency on input quality. This study synthesizes the current state of REDAST research, highlights trends, and proposes future directions, serving as a reference for researchers and practitioners aiming to advance automated software testing.
Problem

Research questions and friction points this paper is trying to address.

Aligning test generation with software requirements
Automating test artifact generation from requirements
Evaluating effectiveness of testing frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated test artifact generation from requirements
Systematic literature review of REDAST techniques
Evaluation of testing frameworks effectiveness and challenges
Fanyu Wang
Monash University
Requirements Engineering · Applied NLP

Chetan Arora
Faculty of Information Technology, Monash University, Australia

Chakkrit Tantithamthavorn
Faculty of Information Technology, Monash University, Australia

Kaicheng Huang
Faculty of Information Technology, Monash University, Australia

Aldeida Aleti
Prof, Faculty of Information Technology, Monash University
Software Engineering · Artificial Intelligence