Leveraging GPT-4 for Vulnerability-Witnessing Unit Test Generation

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manually writing unit tests that witness vulnerabilities is costly, and such tests often fail to verify whether a patch truly eliminates the underlying flaw. Method: This paper proposes a vulnerability-witnessing test generation approach leveraging GPT-4, prompted with real-world vulnerable and patched code pairs from the VUL4J dataset; no domain-specific pre-training is used. It incorporates code-context awareness and a self-correction mechanism to produce tests that reliably distinguish vulnerable from patched program states. Contribution/Results: The authors introduce the first systematic evaluation framework tailored to vulnerability-witnessing capability, assessing both syntactic/semantic correctness and human-in-the-loop usability. Experiments show that 66.5% of the generated tests are syntactically correct, and 7.5% could be automatically validated as semantically correct. Most generated test templates require only lightweight manual editing to become practical security tests. This work presents the first comprehensive empirical assessment of large language models for generating vulnerability-witnessing tests, establishing a novel paradigm and foundational evidence for AI-augmented software security testing.
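To make the core idea concrete, below is a minimal sketch of what a "vulnerability-witnessing" unit test looks like: a test that fails against the vulnerable (pre-fix) code and passes against the patched code. The path-traversal flaw and the `resolvePath()` method are invented for illustration; they are not taken from VUL4J or from the paper.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical illustration of a vulnerability-witnessing test.
// The flaw (path traversal) and the method under test are assumptions
// made for this sketch, not examples from the paper's dataset.
public class WitnessExample {
    // Patched version of the method: rejects inputs that escape the base
    // directory. The vulnerable pre-fix version would concatenate blindly.
    static String resolvePath(String base, String userInput) {
        Path resolved = Paths.get(base).resolve(userInput).normalize();
        if (!resolved.startsWith(Paths.get(base).normalize())) {
            throw new IllegalArgumentException("path traversal attempt");
        }
        return resolved.toString();
    }

    // The witnessing test: it must FAIL on the vulnerable version (the
    // exploit input slips through) and PASS on the patched version.
    static boolean exploitRejected() {
        try {
            resolvePath("/var/data", "../../etc/passwd");
            return false; // exploit succeeded: vulnerability still present
        } catch (IllegalArgumentException e) {
            return true; // exploit rejected: the patch is witnessed
        }
    }

    public static void main(String[] args) {
        System.out.println("exploit rejected: " + exploitRejected());
    }
}
```

Running the same test against both program states is what distinguishes a witnessing test from an ordinary regression test: its verdict flips at the patch boundary.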

📝 Abstract
In the life-cycle of software development, testing plays a crucial role in quality assurance. Proper testing not only increases code coverage and prevents regressions but it can also ensure that any potential vulnerabilities in the software are identified and effectively fixed. However, creating such tests is a complex, resource-consuming manual process. To help developers and security experts, this paper explores the automatic unit test generation capability of one of the most widely used large language models, GPT-4, from the perspective of vulnerabilities. We examine a subset of the VUL4J dataset containing real vulnerabilities and their corresponding fixes to determine whether GPT-4 can generate syntactically and/or semantically correct unit tests based on the code before and after the fixes as evidence of vulnerability mitigation. We focus on the impact of code contexts, the effectiveness of GPT-4's self-correction ability, and the subjective usability of the generated test cases. Our results indicate that GPT-4 can generate syntactically correct test cases 66.5% of the time without domain-specific pre-training. Although the semantic correctness of the fixes could be automatically validated in only 7.5% of the cases, our subjective evaluation shows that GPT-4 generally produces test templates that can be further developed into fully functional vulnerability-witnessing tests with relatively minimal manual effort. Therefore, despite the limited data, our initial findings suggest that GPT-4 can be effectively used in the generation of vulnerability-witnessing tests. It may not operate entirely autonomously, but it certainly plays a significant role in a partially automated process.
Problem

Research questions and friction points this paper is trying to address.

Automating unit test generation for vulnerability detection using GPT-4
Evaluating GPT-4's ability to create syntactically and semantically correct tests
Assessing usability of generated tests for real-world vulnerability mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-4 generates vulnerability-witnessing unit tests
Leverages code context and self-correction ability
Partially automated test generation requiring minimal manual effort
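The self-correction ability mentioned above can be pictured as a compile-and-repair loop: generated test code is compiled, and on failure the diagnostics are fed back to the model for another attempt. The sketch below is an assumption about how such a loop could be wired up; the `LlmClient` and `Compiler` interfaces are hypothetical and do not reflect the paper's actual implementation.

```java
import java.util.List;

// Hypothetical sketch of a self-correction loop for generated tests.
// Both interfaces are assumptions made for illustration.
public class SelfCorrectionLoop {
    interface LlmClient {
        String generate(String prompt); // returns candidate test source code
    }

    interface Compiler {
        List<String> compile(String source); // diagnostics; empty = success
    }

    static String generateWithSelfCorrection(LlmClient llm, Compiler javac,
                                             String initialPrompt, int maxRounds) {
        String code = llm.generate(initialPrompt);
        for (int round = 0; round < maxRounds; round++) {
            List<String> errors = javac.compile(code);
            if (errors.isEmpty()) {
                return code; // syntactically correct test obtained
            }
            // Feed the compiler errors back so the model can repair its output.
            code = llm.generate("Fix these compilation errors:\n"
                    + String.join("\n", errors) + "\nCode:\n" + code);
        }
        return code; // best effort after maxRounds attempts
    }
}
```

Bounding the loop with `maxRounds` keeps the process cheap when the model cannot converge, which matches the paper's framing of GPT-4 as one component of a partially automated pipeline rather than a fully autonomous generator.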