Improving LLM Reasoning for Vulnerability Detection via Group Relative Policy Optimization

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit weak reasoning capabilities and severe prediction bias in software vulnerability detection, such as over-detecting certain vulnerability types while missing others. Method: This paper adapts Group Relative Policy Optimization (GRPO), a recent policy-gradient method, to the vulnerability detection task by reformulating its advantage function and introducing a rule-injected reward mechanism. Structured security rules guide fine-grained policy optimization, and training combines supervised fine-tuning with reinforcement learning on multi-source labeled datasets, including BigVul, DiverseVul, and CleanVul. Contribution/Results: Experiments demonstrate significant improvements in detection accuracy and cross-vulnerability-type generalization across multiple benchmarks. The approach also improves inference consistency, supporting the effectiveness and scalability of rule-guided reinforcement learning for code security analysis.
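For context, this is a minimal sketch of the vanilla GRPO group-relative advantage that the paper reformulates. The paper's exact redefinition is not reproduced here; the function and variable names, the group size, and the example rewards are all illustrative assumptions.

```python
import numpy as np

def grpo_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Vanilla GRPO: score each sampled completion relative to its own
    # sampling group, so no learned critic (value network) is needed:
    # A_i = (r_i - mean(r)) / (std(r) + eps).
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: rewards for G = 4 responses sampled for the same code snippet.
group_rewards = np.array([1.2, 1.0, 0.2, 0.0])
print(grpo_advantages(group_rewards))  # positive for above-average responses
```

Because the baseline is the group's own statistics, only completions that beat their siblings on the reward rules receive a positive learning signal.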

📝 Abstract
Improving and understanding the training dynamics and reasoning of Large Language Models (LLMs) has become essential for their deployment in AI-based security tools, such as software vulnerability detection. In this work, we present an extensive study aimed at advancing recent RL-based finetuning techniques for LLMs in the context of vulnerability detection. We start by highlighting key limitations of commonly adopted LLMs, such as their tendency to over-predict certain types of vulnerabilities while failing to detect others. To address this challenge, we explore the use of Group Relative Policy Optimization (GRPO), a recent policy-gradient method, for guiding LLM behavior through structured, rule-based rewards. We enable its application to the vulnerability detection task by redefining its advantage functions and reward signals using annotations from widely used datasets in the field, including BigVul, DiverseVul, and CleanVul. The proposed methodology enables an extensive set of experiments, addressing multiple research questions regarding the impact of GRPO on generalization, reasoning capabilities, and performance improvements over standard supervised finetuning (SFT). Our findings offer valuable insights into the potential of RL-based training to enhance both the performance and reasoning abilities of LLMs in the context of software vulnerability detection.
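As a rough illustration of the structured, rule-based rewards described above, the sketch below combines a format rule with a correctness rule derived from dataset annotations. The `<answer>` tag format, the weights, and the `rule_injected_reward` helper are assumptions for illustration, not the paper's actual reward design.

```python
import re

def rule_injected_reward(response: str, gold_label: str) -> float:
    """Toy composite reward in the spirit of rule-based GRPO rewards:
    a small bonus for respecting the expected output format, plus a
    larger bonus when the verdict matches the dataset annotation."""
    reward = 0.0
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match:
        reward += 0.2  # format rule: verdict wrapped in <answer> tags
        predicted = match.group(1).strip().lower()
        if predicted == gold_label.lower():
            reward += 1.0  # correctness rule vs. BigVul/DiverseVul/CleanVul label
    return reward

# Example usage with a hypothetical model response.
print(rule_injected_reward("Reasoning... <answer>vulnerable</answer>", "vulnerable"))  # 1.2
```

Rewards of this shape are cheap to evaluate per sampled completion, which is what makes the group-relative (critic-free) GRPO setup practical for this task.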
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning for vulnerability detection accuracy
Addressing LLM over-prediction and detection gaps in vulnerabilities
Optimizing RL-based training to improve LLM generalization and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Group Relative Policy Optimization (GRPO)
Redefines advantage functions with dataset annotations
Enhances LLM reasoning via structured rule-based rewards
Marco Simoni
Institute of Informatics and Telematics, National Research Council of Italy, Via G. Moruzzi 1, Pisa, 56124, Italy
Aleksandar Fontana
Institute of Informatics and Telematics, National Research Council of Italy, Via G. Moruzzi 1, Pisa, 56124, Italy; Department of Excellence in Robotics and AI, TeCIP, Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56127, Pisa, Italy
Giulio Rossolini
Scuola Superiore Sant'Anna
Trustworthy AI, Safe and Secure AI, Computer Vision, LLMs
Andrea Saracino
Associate Professor at Scuola Superiore Sant'Anna
Mobile Security, Network Security, Distributed Systems, Trust