Combining Static Code Analysis and Large Language Models Improves Correctness and Performance of Algorithm Recognition

📅 2026-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a hybrid approach that combines large language models (LLMs) with lightweight static code analysis to automatically identify algorithms in source code while reducing reliance on identifier names. By pairing a multi-stage filtering mechanism with in-context learning prompting strategies, the method cuts LLM invocation frequency by 72.39%–97.50% while improving F1-scores by up to 12 percentage points, reaching 75%–77%. Experimental results show that the approach remains accurate and efficient even under identifier obfuscation, substantially reducing runtime and underscoring its practical value for program comprehension and software maintenance.
📝 Abstract
Context: Since it is well-established that developers spend a substantial portion of their time understanding source code, the ability to automatically identify algorithms within source code presents a valuable opportunity. This capability can support program comprehension, facilitate maintenance, and enhance overall software quality.
Objective: We empirically evaluate how combining LLMs with static code analysis can improve the automated recognition of algorithms, while also evaluating the LLMs' standalone performance and their dependence on identifier names.
Method: We perform multiple experiments evaluating the combination of LLMs with static analysis using different filter patterns. We compare this combined approach against the LLMs' standalone performance under various prompting strategies and investigate the impact of systematic identifier obfuscation on classification performance and runtime.
Results: The combination of LLMs with lightweight static analysis performs surprisingly well, reducing the required LLM calls by 72.39%–97.50% depending on the filter pattern. This not only lowers runtime significantly but also improves F1-scores by up to 12 percentage points (pp) compared to the baseline. Regarding the different prompting strategies, in-context learning with two examples provides an effective trade-off between classification performance and runtime efficiency, achieving F1-scores of 75–77% with only a modest increase in inference time. Lastly, we find that LLMs are not solely dependent on identifier names, as they can still identify most algorithm implementations when identifiers are obfuscated.
Conclusion: By combining LLMs with static analysis, we achieve substantial reductions in runtime while simultaneously improving F1-scores, underscoring the value of a hybrid approach.
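The filter-then-classify pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the structural filter (here, a crude "has a loop and a comparison" check over a Python AST) and the `llm_classify` stub are hypothetical stand-ins for the paper's filter patterns and LLM prompting.

```python
import ast

def passes_filter(source: str) -> bool:
    # Hypothetical lightweight static check: forward only code that
    # contains a loop and a comparison (a crude proxy for algorithmic
    # structure); everything else is rejected without an LLM call.
    tree = ast.parse(source)
    has_loop = any(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))
    has_compare = any(isinstance(n, ast.Compare) for n in ast.walk(tree))
    return has_loop and has_compare

def llm_classify(source: str) -> str:
    # Placeholder for a real LLM call (e.g. a few-shot
    # in-context-learning prompt, as evaluated in the paper).
    return "unknown-algorithm"

def recognize(source: str) -> str:
    # Stage 1: cheap static filter. Stage 2: LLM only if it passes,
    # which is where the reduction in LLM invocations comes from.
    if not passes_filter(source):
        return "no-algorithm"
    return llm_classify(source)

bubble = (
    "def f(a):\n"
    "    for i in range(len(a)):\n"
    "        for j in range(len(a) - 1):\n"
    "            if a[j] > a[j + 1]:\n"
    "                a[j], a[j + 1] = a[j + 1], a[j]\n"
    "    return a\n"
)
print(recognize(bubble))       # passes the filter, reaches the LLM stub
print(recognize("x = 1 + 2"))  # filtered out without an LLM call
```

Note that because the filter works on syntactic structure rather than names, this kind of pre-filtering is unaffected by identifier obfuscation.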
Problem

Research questions and friction points this paper is trying to address.

algorithm recognition
program comprehension
static code analysis
large language models
source code understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

static code analysis
large language models
algorithm recognition
hybrid approach
identifier obfuscation
Denis Neumüller
Ulm University, Germany
Sebastian Boll
Ulm University, Germany
David Schüler
Ulm University, Germany
Matthias Tichy
Professor, Ulm University, Germany
software engineering · model-driven software engineering · software quality · automotive systems · empirical software engineering