🤖 AI Summary
This study investigates how software testing practitioners prioritize key quality attributes of test cases and test suites—such as fault detection capability, usability, and maintainability—and the challenges they face in ensuring those attributes in industrial practice. We designed a structured questionnaire grounded in a systematic literature review and deployed it to a large, heterogeneous cohort of software testing practitioners on LinkedIn, yielding 354 valid responses. Mixed-method (qualitative and quantitative) analysis revealed significant contextual variation in attribute prioritization across software-testing contexts—including agile, embedded, and web development—and identified four pervasive barriers: ambiguous attribute definitions, absence of actionable measurement metrics, lack of formal review mechanisms, and lack of external support. To our knowledge, this is the first empirical study to systematically characterize such cross-context differences in perception and the practical impediments involved. The findings provide evidence-based guidance for refining test quality assessment frameworks and prioritizing engineering improvements in real-world testing practice.
📝 Abstract
Context: The quality of test suites and their constituent test cases significantly affects confidence in software testing. While research has identified several quality attributes of test cases and test suites, their relative importance in practice is not well understood.

Objective: We investigate practitioners' perceptions of the relative importance of quality attributes of test cases and test suites, and the challenges they face in ensuring the attributes they perceive as important.

Method: We conducted an industrial survey using a questionnaire based on the quality attributes identified in an extensive literature review. Our sampling strategy leveraged LinkedIn to draw a large and heterogeneous sample of professionals with software testing experience.

Results: We collected 354 responses from practitioners with a wide range of experience. The majority of practitioners rated Fault Detection, Usability, Maintainability, Reliability, and Coverage as the most important quality attributes. Resource Efficiency, Reusability, and Simplicity received the most divergent opinions, which, according to our analysis, depend on the software-testing context. We identified challenges common to the important attributes: inadequate definitions, lack of useful metrics, lack of an established review process, and lack of external support.

Conclusion: The findings point out where practitioners need further support to achieve high-quality test cases and test suites in different software-testing contexts. They can serve as a guideline for academic researchers looking for research directions on the topic, and can encourage companies to better support practitioners in achieving high-quality test cases and test suites.