🤖 AI Summary
Binary reuse detection relies on function call graph (FCG)-driven binary decomposition, yet existing methods assume that FCG structure is stable across compilation settings. This assumption is flawed: compiler choice, optimization level (especially inlining), and target architecture all significantly distort FCGs. Method: this work systematically characterizes FCG variability across 17 compilers, 6 optimization levels, and 4 architectures, constructing a large-scale cross-compiler binary dataset, and proposes a decomposition evaluation framework grounded in mapping stability and clustering consistency. Contribution/Results: we identify three robust mapping patterns that persist despite drastic changes in FCG size. Empirical evaluation reveals that mainstream decomposition methods suffer simultaneous degradation in coverage and community consistency under cross-compiler settings. These findings provide both theoretical foundations and methodological support for improving the robustness of binary reuse detection.
📝 Abstract
Binary decomposition, which splits binary files into modules, plays a critical role in binary reuse detection. Existing binary decomposition works either apply anchor-based methods, which extend anchor functions to generate modules, or clustering-based methods, which use clustering algorithms to group binary functions; both rely on the assumption that reused code shares similar function call relationships. However, we find that function call graphs (FCGs) vary substantially across compilation settings, especially under diverse function inlining decisions.
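The two method families can be illustrated on a toy FCG. This is a minimal, self-contained sketch with hypothetical function names, not any published tool's implementation: anchor-based decomposition grows a module outward from an anchor function, while clustering-based decomposition groups functions (here, simply by weakly connected components, a stand-in for real clustering algorithms). Both traverse call edges, which is exactly what inlining rewrites.

```python
from collections import deque

# Toy FCG: caller -> list of callees. All names are hypothetical.
fcg = {
    "main":       ["zlib_init", "app_run"],
    "zlib_init":  ["zlib_alloc"],
    "zlib_alloc": [],
    "app_run":    ["app_step"],
    "app_step":   [],
    "helper":     [],  # utility with no call edges
}

def anchor_module(fcg, anchor):
    """Anchor-based decomposition: grow one module by BFS from an anchor."""
    seen, queue = {anchor}, deque([anchor])
    while queue:
        for callee in fcg.get(queue.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def cluster_modules(fcg):
    """Clustering-based decomposition: here, weakly connected components."""
    undirected = {f: set() for f in fcg}
    for caller, callees in fcg.items():
        for callee in callees:
            undirected[caller].add(callee)
            undirected.setdefault(callee, set()).add(caller)
    modules, seen = [], set()
    for start in undirected:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            for n in undirected[queue.popleft()]:
                if n not in comp:
                    comp.add(n)
                    queue.append(n)
        seen |= comp
        modules.append(comp)
    return modules
```

On this graph, `anchor_module(fcg, "zlib_init")` yields the two-function module `{"zlib_init", "zlib_alloc"}`, while `cluster_modules(fcg)` splits the graph into the `main` component and the isolated `helper`.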
In this work, we conduct the first systematic empirical study of the variance of FCGs produced under various compilation settings and explore its effect on binary decomposition methods. We first construct a dataset compiled by 17 compilers at 6 optimization levels for 4 architectures, and analyze the changes in and mappings between the resulting FCGs. We find that although FCG sizes change dramatically, the FCGs remain linked by three different kinds of mappings. We then evaluate existing works under this FCG variance; the results show that they face significant challenges in cross-compiler evaluation with diverse optimization settings. Finally, we propose a method to identify the optimal decomposition and compare existing decomposition works against it. Existing works either suffer from low coverage or fail to produce stable community similarities.
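To make the inlining effect concrete, here is an illustrative sketch (hypothetical function names, not the paper's dataset or algorithm): inlining a callee merges its node into the caller's, so several source functions end up mapped to a single node in the optimized FCG, which is one flavor of the cross-FCG mappings that decomposition methods must tolerate.

```python
def inline(fcg, mapping, caller, callee):
    """Merge `callee` into `caller`, as function inlining does to the FCG.

    Simplifying assumption: `callee` is reached only from `caller`, so no
    other edges need rewiring. `mapping` records, for every original
    function, which node of the optimized FCG now contains its code.
    """
    fcg[caller] |= fcg.pop(callee)   # caller inherits callee's out-edges
    fcg[caller].discard(callee)      # the call edge itself disappears
    fcg[caller].discard(caller)      # drop any self-loop created by merging
    for orig, node in mapping.items():
        if node == callee:
            mapping[orig] = caller   # callee's code now lives inside caller
    return fcg, mapping

# Hypothetical three-function program: parse -> lex -> read_char.
fcg = {"parse": {"lex"}, "lex": {"read_char"}, "read_char": set()}
mapping = {f: f for f in fcg}        # identity mapping before optimization

inline(fcg, mapping, "parse", "lex")
# The FCG shrinks from 3 nodes to 2, yet every original function still
# maps somewhere: "parse" and "lex" both map onto the merged "parse" node.
```

Under aggressive optimization this contraction repeats many times, which is how FCG size can change drastically while mappings of this kind still link the unoptimized and optimized graphs.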