🤖 AI Summary
Existing learning-based security patch detection methods rely on source code and therefore cannot be applied to closed-source software, whose patches ship only as binaries. Method: This paper presents the first systematic investigation into the applicability of code large language models (Code LLMs) to binary security patch detection. The authors construct the first large-scale bimodal dataset (19,448 samples) comprising both assembly and decompiled pseudo-code representations, propose a domain-informed instruction-tuning strategy to inject binary security knowledge, and conduct a comprehensive multi-dimensional evaluation across 19 code LLMs of varying scales. Contribution/Results: Zero-shot prompting yields limited performance; after fine-tuning, pseudo-code representations consistently outperform assembly, with the best-performing model achieving substantial accuracy gains. This work bridges a critical research gap in applying Code LLMs to binary patch detection, empirically establishes pseudo-code as the superior intermediate representation, and introduces a new paradigm for security analysis of closed-source software.
📝 Abstract
Security patch detection (SPD) is crucial for maintaining software security, as unpatched vulnerabilities can lead to severe security risks. In recent years, numerous learning-based SPD approaches have demonstrated promising results on source code. However, these approaches typically cannot be applied to the closed-source applications and proprietary systems that constitute a significant portion of real-world software, because such software releases patches only as binary files and its source code is inaccessible. Despite the impressive performance of code large language models (LLMs) in code intelligence and in binary analysis tasks such as decompilation and compiler optimization, their potential for detecting binary security patches remains unexplored, exposing a significant research gap between their demonstrated low-level code understanding capabilities and this critical security task. To address this gap, we construct a large-scale binary patch dataset containing **19,448** samples with two levels of representation, assembly code and pseudo-code, and systematically evaluate **19** code LLMs of varying scales to investigate their capability on binary SPD tasks. Our initial exploration demonstrates that directly prompting vanilla code LLMs struggles to accurately identify security patches among binary patches, and that even state-of-the-art prompting techniques fail to compensate for the vanilla models' lack of binary SPD domain knowledge. Drawing on these initial findings, we further investigate fine-tuning strategies for injecting binary SPD domain knowledge into code LLMs through the two levels of representation. Experimental results demonstrate that the fine-tuned LLMs achieve outstanding performance, with the best results obtained on the pseudo-code representation.