🤖 AI Summary
This work exposes a critical generalization gap in fine-tuning-based automated vulnerability repair (AVR): state-of-the-art models overfit their training data, and conventional token-matching evaluation metrics fail to reflect functional correctness. The paper makes three contributions: (1) semantics-preserving code transformations applied to test sets, probing whether models learn robust repair patterns or surface features; (2) a re-split benchmark with mutually exclusive training, validation, and test sets, eliminating data leakage; and (3) L-AVRBench, an execution-based AVR benchmark that replaces syntactic matching with test-case pass rate as the primary metric. Empirical results reveal that over 60% of patches deemed “successful” by traditional metrics are functionally incorrect. This work establishes a more robust, semantics-aware evaluation paradigm for trustworthy AI-powered security repair.
📝 Abstract
Learning-based automated vulnerability repair (AVR) techniques that fine-tune language models have shown promise in generating vulnerability patches. However, questions remain about their ability to repair unseen vulnerabilities. Our empirical study reveals that state-of-the-art models often overfit the training set and are evaluated on training, validation, and test sets that are not mutually exclusive. Furthermore, match-based metrics, which compare generated patches to reference fixes at the token level, have a fundamental limitation: they fail to account for the many valid ways a vulnerability can be patched. In this paper, we examine the capabilities of state-of-the-art fine-tuned AVR models and the adequacy of match-based evaluation metrics in three ways. First, we apply semantics-preserving transformations to the test sets to determine whether models truly learn robust vulnerability-repair patterns or merely rely on spurious features. Second, we re-split the training, validation, and test sets to be mutually exclusive and evaluate the models on the revised test set to assess their generalization capabilities. Third, we introduce L-AVRBench, a test-based benchmark tailored to learning-based AVR, which overcomes the limitations of match-based metrics and measures the models' true repair capabilities.
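As a concrete illustration of the first step, a semantics-preserving transformation can be as simple as renaming local variables in the test-set code before re-evaluating a model. The sketch below is a minimal, hypothetical example (not the paper's implementation) that uses Python's `ast` module to rename an identifier while leaving the program's behavior unchanged:

```python
import ast


class RenameVars(ast.NodeTransformer):
    """Rename selected identifiers; a purely syntactic, semantics-preserving edit."""

    def __init__(self, mapping):
        self.mapping = mapping  # old name -> new name

    def visit_Name(self, node):
        # Covers both loads and stores of the targeted local variables.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node


def transform(source, mapping):
    """Return `source` with identifiers renamed according to `mapping`."""
    tree = RenameVars(mapping).visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))


original = "def add(a, b):\n    total = a + b\n    return total\n"
variant = transform(original, {"total": "v0"})  # same behavior, different tokens
```

A match-based metric would score `variant` differently from `original` even though both functions compute the same result; this is exactly the gap between token-level matching and functional correctness that transformed test sets are meant to expose.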