Rethinking the Capability of Fine-Tuned Language Models for Automated Vulnerability Repair

📅 2025-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a critical generalization deficiency in fine-tuning-based automated vulnerability repair (AVR): state-of-the-art models overfit their training data, and conventional token-matching evaluation metrics fail to reflect functional correctness. The authors respond with three contributions: (1) semantics-preserving code transformations applied to test sets, to check whether models learn robust repair patterns rather than spurious features; (2) mutually exclusive training, validation, and test splits that remove overlap between them; and (3) L-AVRBench, an execution-driven AVR benchmark that replaces syntactic matching with test-case pass rate as the primary metric. Empirical results reveal that over 60% of patches deemed "successful" by traditional metrics are functionally incorrect. The work establishes a more robust, semantics-aware evaluation paradigm for trustworthy AI-powered security repair.

📝 Abstract
Learning-based automated vulnerability repair (AVR) techniques that utilize fine-tuned language models have shown promise in generating vulnerability patches. However, questions remain about their ability to repair unseen vulnerabilities. Our empirical study reveals that state-of-the-art models often overfit to the training set and are evaluated using training, validation, and test sets that are not mutually exclusive. Furthermore, match-based metrics that compare generated patches to reference fixes at the token level have a key limitation: they fail to account for the many valid ways a vulnerability can be patched. In this paper, we examine the capabilities of state-of-the-art fine-tuned AVR models and the adequacy of match-based evaluation metrics in three ways. First, we apply semantic-preserving transformations to test sets in order to determine whether models truly learn robust vulnerability-repair patterns or simply rely on spurious features. Second, we re-split the training, validation, and test sets to be mutually exclusive and evaluate the models on the revised test set to assess their generalization capabilities. Third, we introduce L-AVRBench, a test-based benchmark tailored for learning-based AVR, to overcome the limitations of match-based metrics and examine the AVR models' true repair capabilities.
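The abstract's critique of match-based metrics can be illustrated with a toy sketch. Everything below (the patch strings, the fix functions, the test cases) is hypothetical and not drawn from the paper's benchmark; it only shows how a functionally correct patch can be rejected by token-level comparison while passing execution-based checks:

```python
# Sketch: token matching vs. execution-based evaluation of patches.
# All names and test cases here are illustrative, not from the paper.

def exact_match(candidate: str, reference: str) -> bool:
    """Match-based metric: a patch counts as 'correct' only if its
    tokens equal the reference fix."""
    return candidate.split() == reference.split()

def passes_tests(patched_fn, test_cases) -> bool:
    """Execution-based metric: a patch counts as correct if the
    patched function yields the expected output on every test case."""
    return all(patched_fn(*args) == expected for args, expected in test_cases)

# Two semantically equivalent fixes for an out-of-bounds read
reference = "return None if i >= len(buf) else buf[i]"
candidate = "return None if not i < len(buf) else buf[i]"

def ref_fix(i, buf):
    return None if i >= len(buf) else buf[i]

def cand_fix(i, buf):
    return None if not i < len(buf) else buf[i]

tests = [((0, [1, 2]), 1), ((5, [1, 2]), None)]

print(exact_match(candidate, reference))  # False: rejected by token match
print(passes_tests(cand_fix, tests))      # True: accepted by execution
```

The candidate patch behaves identically to the reference on every input, yet token-level comparison scores it as a failure, which is exactly the blind spot a test-based benchmark avoids.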
Problem

Research questions and friction points this paper is trying to address.

Evaluating generalization of fine-tuned models for unseen vulnerabilities
Assessing limitations of match-based metrics for patch evaluation
Introducing a test-based benchmark to measure true repair capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-preserving transformations test robust learning
Mutually exclusive dataset splits assess generalization
L-AVRBench benchmark overcomes match-based metric limitations
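As a rough illustration of the first idea, one simple semantics-preserving transformation is consistent identifier renaming. The sketch below is hypothetical (it uses Python's standard `ast` module, whereas the paper's technique and target language may differ) and shows a rewrite that changes surface tokens without changing behavior, which is the property such transformations exploit to probe whether a model relies on spurious lexical features:

```python
import ast

class RenameIdentifiers(ast.NodeTransformer):
    """Consistently rename parameters and local variables to opaque
    names (v0, v1, ...). Identifier choice does not affect semantics,
    so a model with robust repair patterns should be unaffected.
    Minimal sketch: does not guard builtins or global references."""

    def __init__(self):
        self.mapping = {}

    def _rename(self, name: str) -> str:
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_arg(self, node):   # function parameters
        node.arg = self._rename(node.arg)
        return node

    def visit_Name(self, node):  # variable reads and writes
        node.id = self._rename(node.id)
        return node

def transform(source: str) -> str:
    """Apply the renaming and emit the transformed source."""
    tree = RenameIdentifiers().visit(ast.parse(source))
    return ast.unparse(tree)

original = "def add(a, b):\n    total = a + b\n    return total"
print(transform(original))  # same behavior, different surface tokens
```

The transformed function computes exactly the same results as the original, so any drop in a model's repair success on such inputs points to reliance on surface features rather than learned repair semantics.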
Woorim Han
Seoul National University, Seoul, South Korea
Yeongjun Kwak
UNIST, Ulsan, South Korea
Miseon Yu
Seoul National University, Seoul, South Korea
Kyeongmin Kim
UNIST, Ulsan, South Korea
Younghan Lee
Sungshin Women’s University, Seoul, South Korea
Hyungon Moon
UNIST
Systems Security · Operating Systems · Computer Architecture
Yunheung Paek
Seoul National University, Seoul, South Korea