🤖 AI Summary
Prior evaluations of large language models (LLMs) for vulnerability repair rely predominantly on publicly disclosed vulnerabilities, leaving synthetically constructed vulnerabilities without systematic assessment, a critical gap in understanding model generalization and robustness. Method: This work introduces the first unified benchmark that systematically evaluates the single-shot automated patching capability of mainstream LLMs (including GPT, LLaMA, DeepSeek, and Mistral) on both real-world and synthetically generated vulnerabilities, with patch correctness rigorously validated via automated proof-of-vulnerability (PoV) testing. Contribution/Results: Experiments show that LLMs achieve significantly higher repair success rates on real vulnerabilities than on synthetic ones, and that models complement one another, rather than merely forming a performance ranking, across vulnerability types and coverage dimensions. Based on these findings, we propose a task-characteristic-driven model selection strategy, providing empirical grounding and methodological guidance for building multi-model collaborative vulnerability repair systems.
📝 Abstract
Automated vulnerability patching is crucial for software security, and recent advances in Large Language Models (LLMs) show promising capabilities for automating this task. However, existing research has primarily assessed LLMs on publicly disclosed vulnerabilities, leaving their effectiveness on artificial vulnerabilities largely unexplored. In this study, we empirically evaluate the patching effectiveness and complementarity of several prominent LLMs, including OpenAI's GPT variants, LLaMA, DeepSeek, and Mistral models, on both real and artificial vulnerabilities. Our evaluation employs Proof-of-Vulnerability (PoV) test execution to concretely assess whether LLM-generated source code successfully patches a vulnerability. Our results show that LLMs patch real vulnerabilities more effectively than artificial ones. Additionally, our analysis reveals significant variability across LLMs in overlap (multiple LLMs patching the same vulnerabilities) and complementarity (vulnerabilities patched exclusively by a single LLM), underscoring the importance of selecting appropriate LLMs for effective vulnerability patching.
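The paper's actual evaluation harness is not shown here, but the decision logic behind PoV-based patch validation can be sketched. In this hypothetical Python sketch (all names are assumptions, not the authors' code), the PoV check and the project's functional test suite are injected as callables; a candidate patch counts as correct only if the PoV no longer triggers the bug *and* intended behavior is preserved:

```python
from typing import Callable

def validate_patch(pov_triggers: Callable[[], bool],
                   functional_tests_pass: Callable[[], bool]) -> str:
    """Classify a candidate patch via PoV-style validation.

    A patch is accepted only if the proof-of-vulnerability test no
    longer reproduces the bug AND the functional tests still pass,
    i.e. the fix does not regress intended behavior.
    """
    if pov_triggers():
        return "vulnerable"   # PoV still reproduces the bug: patch failed
    if not functional_tests_pass():
        return "broken"       # bug gone, but functionality regressed
    return "patched"          # PoV defeated and behavior preserved

# Example: a patch that defeats the PoV while keeping tests green
print(validate_patch(lambda: False, lambda: True))  # patched
```

In a real harness the two callables would rebuild the patched program and run it under the PoV input and the test suite (e.g. via subprocess exit codes); the three-way outcome matters because a patch that merely breaks the program also "stops" the PoV.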