🤖 AI Summary
Large language models (LLMs) face fundamental semantic reasoning bottlenecks in code vulnerability detection, particularly in capturing fine-grained execution semantics such as variable dependencies and control-flow transitions. Systematic evaluation reveals that scaling model size or training data yields diminishing returns (state-of-the-art models achieve only 54.5% Balanced Accuracy), exposing inherent limitations of autoregressive pretraining for precise program reasoning. Method: The paper evaluates prominent models and training settings for their effect on vulnerability detection, combining prompt engineering, domain-specific knowledge injection, fine-tuning, and fine-grained error analysis of model responses. Contribution/Results: The analysis shows that current LLMs consistently fail at critical multi-step reasoning, such as tracking variable relations and resolving conditional branches, suggesting that modeling or training innovations beyond scale alone, for example execution-aware pretraining, may be needed. The work provides both a diagnostic evaluation and a conceptual foundation for execution-grounded code reasoning models.
📝 Abstract
In this paper, we present a challenging code reasoning task: vulnerability detection. Large Language Models (LLMs) have shown promising results in natural-language and math reasoning, but state-of-the-art (SOTA) models achieved only 54.5% Balanced Accuracy in our vulnerability detection evaluation, even those pre-trained on large amounts of source code. Our error analysis of LLM responses shows that the models struggle to reason about the code semantics relevant to identifying vulnerabilities, especially subtle semantic differences caused by small textual changes. We explored prominent models and training settings to understand their effects on vulnerability detection performance -- including better prompts, larger models, more pre-training data, and fine-tuning -- but none led to significant improvements. This raises the question of whether simply scaling training data and model size will allow us to "solve" complex code reasoning tasks like vulnerability detection, or whether a fundamental shift in modeling and training techniques is required. We also explored adding domain knowledge to prompts; although it helped certain models understand some code semantics, vulnerability detection requires multi-step reasoning, and these models still failed at steps such as reasoning about variable relations. Our results suggest that new models, new training methods, or more execution-specific pretraining data may be needed to conquer vulnerability detection. We speculate that auto-regressive pre-training on source code may not effectively extract code semantics, especially with current pretraining mixtures, in which execution data is scarce. Success on vulnerability detection as a code reasoning task can benefit many areas of software engineering such as debugging, test input generation, and program repair. Our code and data are available at https://doi.org/10.6084/m9.figshare.27368025.
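To make the abstract's central difficulty concrete, here is an illustrative sketch (not drawn from the paper's dataset; the function names and buffer values are invented for this example) of the kind of "subtle semantic difference caused by a small textual change" that the error analysis found models struggle with: a one-token change in a loop bound turns a correct summation into an out-of-bounds read, and deciding which variant is vulnerable requires reasoning about the relation between the index variable and the buffer length rather than pattern-matching on surface text.

```python
# Illustrative off-by-one pair (hypothetical example, not from the paper's data).
# The two functions differ by a single token in the loop bound, yet only one
# is safe -- classifying them correctly requires tracking the relation
# between the index i and the buffer length n.

def sum_buffer_fixed(buf, n):
    """Sums buf[0..n-1]; the bound keeps i strictly below n."""
    total = 0
    for i in range(n):        # i takes values 0 .. n-1: in bounds
        total += buf[i]
    return total

def sum_buffer_vulnerable(buf, n):
    """One-token change: the loop now also reads buf[n], past the end."""
    total = 0
    for i in range(n + 1):    # i reaches n: out-of-bounds read
        total += buf[i]
    return total

if __name__ == "__main__":
    buf = [1, 2, 3, 4]
    print(sum_buffer_fixed(buf, len(buf)))        # sums all four elements
    try:
        sum_buffer_vulnerable(buf, len(buf))
    except IndexError:
        print("out-of-bounds read at index", len(buf))
```

In a memory-unsafe language such as C, the vulnerable variant would silently read adjacent memory instead of raising an error, which is why detecting it demands the execution-level reasoning about variable relations that the abstract identifies as a failure point.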