On the Lack of Robustness of Binary Function Similarity Systems

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Binary function similarity models lack robustness in security-critical scenarios, particularly under adversarial attacks. Method: The work presents a systematic robustness evaluation of mainstream learning-based models, built around a lightweight black-box greedy attack that jointly perturbs control-flow graph (CFG) topology and node attributes. Contribution/Results: High accuracy on clean data does not imply high robustness, exposing a trade-off between performance and robustness. The attack compromises all evaluated state-of-the-art models, with average success rates of 57.06% (targeted) and 95.81% (untargeted), demonstrating their pervasive fragility. The study offers a benchmark and methodology for the security evaluation and robustness hardening of binary analysis models.
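
The two headline numbers measure different goals: an untargeted attack succeeds when the perturbed function is no longer matched to its original, while a targeted attack succeeds only when the model matches it to an attacker-chosen function. A minimal sketch of how such rates could be tallied, assuming a 1-based ranking over the function pool (the exact success criterion here is our assumption, not the paper's evaluation code):

```python
# Hypothetical metric sketch: how targeted vs. untargeted attack success
# rates could be computed. The ranking criterion is an assumption.
def attack_success_rate(results, targeted):
    """results: one (rank_of_original, rank_of_target) pair per attacked
    query, ranked against the pool after perturbation (1-based ranks;
    None means the function dropped out of the returned results)."""
    hits = 0
    for rank_orig, rank_target in results:
        if targeted:
            hits += rank_target == 1                    # target is the top match
        else:
            hits += rank_orig is None or rank_orig > 1  # original was dislodged
    return hits / len(results)

# Example: two of three untargeted attacks dislodge the original top match.
print(attack_success_rate([(1, 5), (3, 2), (None, 7)], targeted=False))  # ~0.67
```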

📝 Abstract
Binary function similarity, which often relies on learning-based algorithms to identify which functions in a pool are most similar to a given query function, is a sought-after topic in different communities, including machine learning, software engineering, and security. Its importance stems from its impact in facilitating several crucial tasks, from reverse engineering and malware analysis to automated vulnerability detection. Whereas recent work has shed light on the performance of this long-studied problem, the research landscape remains largely lacking in understanding of the resiliency of state-of-the-art machine learning models against adversarial attacks. As security requires reasoning about adversaries, in this work we assess the robustness of such models through a simple yet effective black-box greedy attack, which modifies the topology and the content of the control flow graph of the attacked functions. We demonstrate that this attack successfully compromises all the models, achieving average attack success rates of 57.06% and 95.81% depending on the problem setting (targeted and untargeted attacks, respectively). Our findings are insightful: top performance on clean data does not necessarily translate to top robustness, which explicitly highlights performance-robustness trade-offs one should consider when deploying such models and calls for further research.
Problem

Research questions and friction points this paper is trying to address.

Assessing the robustness of binary function similarity models against adversarial attacks
Quantifying the performance-robustness trade-offs of learning-based models
Understanding the resiliency of state-of-the-art models in security-critical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

A black-box greedy attack on binary function similarity models (sketched after this list)
Joint modification of control-flow graph topology and node content
Highlights performance-robustness trade-offs that clean-data accuracy hides
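
At its core, the attack is a query-only hill climb on the victim model's similarity score. The sketch below reconstructs that loop under stated assumptions: similarity_fn stands in for the black-box model, and the candidate set (single edge insertions plus small node-feature bumps) is a toy stand-in for the paper's semantics-preserving binary rewrites. All names here (candidate_perturbations, greedy_attack, budget) are illustrative, not the authors' code.

```python
# Minimal sketch of a black-box greedy attack on a CFG-based similarity
# model. Assumptions (ours, not the paper's code): CFGs are networkx
# DiGraphs whose nodes carry a numeric "features" vector, and
# similarity_fn(cfg_a, cfg_b) -> float is the only access to the model.
import itertools
import networkx as nx
import numpy as np

def candidate_perturbations(cfg: nx.DiGraph):
    """Yield one-step variants of cfg: a new edge, or a bumped node feature."""
    nodes = list(cfg.nodes)
    # Topology candidates: add a single edge between existing basic blocks
    # (the real attack only applies rewrites that preserve semantics).
    for u, v in itertools.product(nodes, nodes):
        if u != v and not cfg.has_edge(u, v):
            g = cfg.copy()
            g.add_edge(u, v)
            yield g
    # Attribute candidates: bump one entry of a node's feature vector,
    # e.g. mimicking the insertion of a dead instruction in that block.
    for n in nodes:
        g = cfg.copy()
        feats = np.array(g.nodes[n]["features"], dtype=float)
        feats[np.random.randint(len(feats))] += 1.0
        g.nodes[n]["features"] = feats
        yield g

def greedy_attack(cfg, reference_cfg, similarity_fn, budget=20, targeted=False):
    """Greedily keep the single perturbation that best shifts the score.

    Untargeted: reference_cfg is the original function; minimize similarity.
    Targeted:   reference_cfg is the attacker's target; maximize similarity.
    """
    sign = 1.0 if targeted else -1.0
    best = cfg.copy()
    best_score = sign * similarity_fn(best, reference_cfg)
    for _ in range(budget):  # each round spends many black-box queries
        improved = False
        for cand in candidate_perturbations(best):
            score = sign * similarity_fn(cand, reference_cfg)
            if score > best_score:
                best, best_score, improved = cand, score, True
        if not improved:  # no single-step change helps: local optimum
            break
    return best
```

Each round scans every one-step candidate and commits the best one, so the query cost grows with CFG size and budget; the greedy structure is what keeps the attack lightweight and model-agnostic, since it needs nothing beyond similarity scores.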
👥 Authors

Gianluca Capozzi (Sapienza University of Rome)
Tong Tang (Zhejiang University)
Jie Wan (Zhejiang University)
Ziqi Yang (Zhejiang University)
Daniele Cono D'Elia (Sapienza University of Rome)
Giuseppe Antonio Di Luna (Sapienza University of Rome)
Lorenzo Cavallaro (University College London). Interests: Systems Security, Adversarial Machine Learning, AI Security, Trustworthy Machine Learning
Leonardo Querzoni (Professor, Sapienza University of Rome). Interests: Cyber security, Stream processing, Parallel and distributed systems