🤖 AI Summary
This paper addresses the evaluation of large language models (LLMs) on semantics-preserving Python code obfuscation, proposing “semantic elasticity” as a novel metric and establishing an empirical framework for assessing LLM-driven code obfuscation. Methodologically, few-shot prompting is employed to drive GPT-4-Turbo, Gemini-1.5, and Claude-3.5-Sonnet to generate obfuscated variants of 30 cross-domain functions; outputs are rigorously evaluated via functional equivalence checking, abstract syntax tree (AST) analysis, and cyclomatic complexity measurement. Key contributions include: (1) uncovering the counterintuitive “obfuscation-as-simplification” phenomenon, in which LLM-generated obfuscations consistently reduce cyclomatic complexity; (2) demonstrating GPT-4-Turbo’s superior obfuscation success rate (81%), significantly outperforming Gemini-1.5 (39%) and Claude-3.5-Sonnet (30%); and (3) validating semantic elasticity’s strong correlation with human judgment (Spearman’s ρ = 0.92), confirming its efficacy as an automated evaluation metric.
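The evaluation pipeline described above (functional equivalence checking plus AST-based cyclomatic complexity measurement) can be sketched in miniature. This is an illustrative toy, not the paper's actual harness: the helper names, the hand-written "obfuscated" variant, and the simplified McCabe count (decision nodes + 1) are all assumptions made for the example.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: count branching nodes, plus one."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While,
                          ast.And, ast.Or, ast.IfExp, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return decisions + 1

def functionally_equivalent(src_a: str, src_b: str, name: str, inputs) -> bool:
    """Execute both sources and compare the named function on shared inputs."""
    ns_a, ns_b = {}, {}
    exec(src_a, ns_a)
    exec(src_b, ns_b)
    return all(ns_a[name](*args) == ns_b[name](*args) for args in inputs)

original = """
def is_even(n):
    if n % 2 == 0:
        return True
    return False
"""

# Hand-written stand-in for an LLM-generated obfuscation.
obfuscated = """
def is_even(n):
    return not n & 1
"""

tests = [(0,), (7,), (42,)]
print(functionally_equivalent(original, obfuscated, "is_even", tests))   # True
print(cyclomatic_complexity(original), cyclomatic_complexity(obfuscated))  # 2 1
```

Note that the variant passes the equivalence check while its complexity *drops* from 2 to 1, which mirrors the "obfuscation-by-simplification" effect the paper reports.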
📝 Abstract
Code obfuscation is the conversion of original source code into a functionally equivalent but less readable form, aiming to prevent reverse engineering and intellectual property theft. This is a challenging task, since it is crucial to maintain the functional correctness of the code while substantially disguising it. The recent development of large language models (LLMs) paves the way for practical applications in different domains, including software engineering. This work performs an empirical study on the ability of LLMs to obfuscate Python source code and introduces a metric, semantic elasticity, to measure the quality of obfuscated code. We experimented with three leading LLMs (Claude-3.5-Sonnet, Gemini-1.5, and GPT-4-Turbo) across 30 Python functions from diverse computational domains. Our findings reveal GPT-4-Turbo's remarkable effectiveness with few-shot prompting (81% pass rate versus 29% with standard prompting), significantly outperforming both Gemini-1.5 (39%) and Claude-3.5-Sonnet (30%). Notably, we discovered a counter-intuitive "obfuscation by simplification" phenomenon, where models consistently reduce rather than increase cyclomatic complexity. This study provides a methodological framework for evaluating AI-driven obfuscation while highlighting promising directions for leveraging LLMs in software security.