🤖 AI Summary
Large language models (LLMs) show weak reasoning and severe prediction bias in software vulnerability detection, for example over-predicting certain vulnerability types while failing to detect others.
Method: This paper adapts Group Relative Policy Optimization (GRPO), a recent policy-gradient method, to vulnerability detection. It reformulates the GRPO advantage function and introduces a rule-injected reward mechanism, leveraging structured security rules to guide fine-grained policy optimization, and it combines supervised fine-tuning with reinforcement learning on multi-source labeled datasets, including BigVul, DiverseVul, and CleanVul.
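As a rough sketch of what such a rule-injected reward could look like (the answer tags, rules, and weights below are illustrative assumptions, not the paper's published specification), each sampled completion can be scored against the dataset annotation plus structural output rules:

```python
import re

# Hypothetical rule-injected reward; the tag format and weights are
# illustrative assumptions, not the paper's actual implementation.
def rule_injected_reward(completion: str, label: str) -> float:
    """Score one sampled completion against the dataset annotation
    (label, e.g. "vulnerable"/"safe" from BigVul/DiverseVul/CleanVul)
    plus a structural rule on the output format."""
    reward = 0.0
    # Rule 1: correctness of the final verdict against the label.
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match and match.group(1).strip().lower() == label:
        reward += 1.0
    # Rule 2: format rule rewarding explicit reasoning before the verdict.
    if "<think>" in completion and "</think>" in completion:
        reward += 0.2
    return reward
```

In this sketch the label match dominates the reward, so a well-formatted but wrong answer cannot outscore a correct one.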
Contribution/Results: Experiments on multiple benchmarks demonstrate significant gains in detection accuracy and in generalization across vulnerability types. The approach also improves inference consistency, validating the effectiveness and scalability of rule-guided reinforcement learning for code security analysis.
📝 Abstract
Improving and understanding the training dynamics and reasoning of Large Language Models (LLMs) has become essential for their deployment in AI-based security tools for tasks such as software vulnerability detection. In this work, we present an extensive study aimed at advancing recent reinforcement learning (RL)-based fine-tuning techniques for LLMs in the context of vulnerability detection.
We start by highlighting key limitations of commonly adopted LLMs, such as their tendency to over-predict certain types of vulnerabilities while failing to detect others. To address this challenge, we explore the use of Group Relative Policy Optimization (GRPO), a recent policy-gradient method, for guiding LLM behavior through structured, rule-based rewards. We enable its application to the vulnerability detection task by redefining its advantage function and reward signals using annotations from widely used datasets in the field, including BigVul, DiverseVul, and CleanVul.
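For reference, the baseline GRPO advantage (before the task-specific redefinition, which the abstract does not detail) standardizes each completion's reward within its sampling group; a minimal sketch:

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Baseline GRPO advantage: standardize each completion's reward
    against the mean/std of its own sampling group (no critic network)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1e-6  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Example: rewards for four completions sampled from the same code snippet.
advantages = group_relative_advantages([1.2, 0.0, 1.0, 0.2])
```

Because advantages are computed per group rather than from a learned value function, GRPO sidesteps training a separate critic model.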
The proposed methodology enables an extensive set of experiments addressing multiple research questions about the impact of GRPO on generalization, reasoning capabilities, and performance improvements over standard supervised fine-tuning (SFT). Our findings offer valuable insights into the potential of RL-based training to enhance both the performance and reasoning abilities of LLMs for software vulnerability detection.