🤖 AI Summary
Existing black-box context-free grammar (CFG) inference tools, such as Arvada, TreeVada, and Kedavra, exhibit poor scalability, low grammar readability, and insufficient accuracy when modeling large, complex languages. This paper proposes NatGI, a black-box CFG inference method tailored for program analysis and reverse engineering. The approach introduces three key innovations: (1) a bracket-guided bubble exploration mechanism that accelerates structural discovery; (2) an LLM-driven bubble generation and non-terminal semantic labeling framework that improves grammatical naturalness and interpretability; and (3) hierarchical incremental delta debugging for precise grammar simplification and validation. Evaluation on multi-language benchmarks shows that the method achieves an average F1 score of 0.57, a 25-percentage-point improvement over the best baseline, while producing more compact and semantically intuitive grammars.
📝 Abstract
Black-box context-free grammar inference is crucial for program analysis, reverse engineering, and security, yet existing tools such as Arvada, TreeVada, and Kedavra struggle with scalability, readability, and accuracy on large, complex languages. We present NatGI, a novel LLM-guided grammar inference framework that extends TreeVada's parse tree recovery with three key innovations: bracket-guided bubble exploration, LLM-driven bubble generation and non-terminal labeling, and hierarchical delta debugging (HDD) for systematic tree simplification. Bracket-guided exploration leverages syntactic cues such as parentheses to propose well-structured grammar fragments, while LLM guidance produces meaningful non-terminal names and selects more promising merges. Finally, HDD incrementally removes unnecessary rules, making the grammars both compact and interpretable. In our experiments, we evaluate NatGI on a comprehensive benchmark suite ranging from small languages to larger ones such as Lua, C, and MySQL. Our results show that NatGI consistently outperforms strong baselines in terms of F1 score. On average, NatGI achieves an F1 score of 0.57, which is 25 percentage points (pp) higher than the best-performing baseline, TreeVada. In terms of interpretability, our generated grammars perform significantly better than those produced by existing approaches. Leveraging LLM-based node renaming and bubble exploration, NatGI produces rules with meaningful non-terminal names and compact structures that align more closely with human intuition. As a result, developers and researchers achieve higher accuracy while still being able to easily inspect, verify, and reason about the structure and semantics of the induced grammars.
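To make the bracket-guided idea concrete, here is a minimal sketch of how balanced bracket spans in a token sequence can be collected as candidate "bubbles" (substrings to abstract into new non-terminals). The names `propose_bubbles` and `BRACKETS` are illustrative assumptions, not NatGI's actual API.

```python
# Hypothetical sketch: spans between matched brackets are likely
# syntactic constituents, so they make good candidate bubbles.
BRACKETS = {"(": ")", "[": "]", "{": "}"}

def propose_bubbles(tokens):
    """Return (start, end) index pairs of balanced bracket spans."""
    stack = []   # holds (open_bracket, position)
    spans = []
    for i, tok in enumerate(tokens):
        if tok in BRACKETS:
            stack.append((tok, i))
        elif tok in BRACKETS.values():
            # only record a span when the closer matches the top opener
            if stack and BRACKETS[stack[-1][0]] == tok:
                _, start = stack.pop()
                spans.append((start, i))  # inclusive bracket span
    return spans

print(propose_bubbles(list("a(b+c)d")))  # → [(1, 5)]
```

Each returned span can then be tried as a bubble, with the LLM asked to judge the merge and name the resulting non-terminal.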
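The rule-simplification step can be pictured as a greedy pruning loop in the spirit of delta debugging: repeatedly drop any rule whose removal keeps every positive sample parseable. This is a simplified sketch; the paper's HDD variant works level-by-level over parse trees, and `reduce_rules` and its predicate argument are hypothetical names.

```python
# Hypothetical sketch of greedy rule pruning. `parses_all_positives`
# stands in for "the reduced grammar still parses every positive sample".
def reduce_rules(rules, parses_all_positives):
    """Drop each rule whose removal preserves the validity predicate."""
    kept = list(rules)
    changed = True
    while changed:          # repeat until no single rule can be removed
        changed = False
        for r in list(kept):
            trial = [x for x in kept if x != r]
            if parses_all_positives(trial):
                kept = trial
                changed = True
    return kept
```

For example, with rules `["A", "B", "C"]` and a predicate that only requires `"A"`, the loop prunes the grammar down to `["A"]`.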