Beyond Accuracy: Characterizing Code Comprehension Capabilities in (Large) Language Models

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing code understanding benchmarks, which offer only coarse-grained evaluations and fail to reveal the specific capabilities and shortcomings of large language models (LLMs). To enable fine-grained diagnosis, the authors reformulate code understanding as an input–output consistency verification task and construct a diagnostic framework that evaluates both classification and generation models at the instance level. They further investigate the relationship between model performance and human-centric software complexity metrics—such as lexical size, control flow complexity, and abstract syntax tree (AST) structure. Experimental results show that traditional complexity metrics are weak predictors of model performance (AUROC 0.63), whereas shadow models achieve substantially higher predictive performance (AUROC 0.86), suggesting that LLMs follow non-human-centric patterns in code comprehension and highlighting the urgent need for new evaluation paradigms.
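The reformulation described above turns code understanding into a binary task: given a program, an input, and a claimed output, decide whether the claim is consistent with the program's actual behavior. The sketch below illustrates one way such instances could be constructed; the helper name `make_consistency_instances` and the perturbation scheme are assumptions for illustration, not the paper's actual construction or dataset.

```python
import random

def make_consistency_instances(fn, inputs, seed=0):
    """Build binary consistency-verification instances for one function.

    Each instance pairs an input with a claimed output; the label records
    whether the claim matches the function's real behavior. Negatives are
    made by perturbing the true (integer) output. Illustrative only.
    """
    rng = random.Random(seed)
    instances = []
    for x in inputs:
        true_out = fn(x)
        # Positive instance: the claimed output is the real output.
        instances.append({"input": x, "claimed": true_out, "label": 1})
        # Negative instance: perturb the true output so the claim is wrong.
        corrupted = true_out + rng.choice([-1, 1])
        instances.append({"input": x, "claimed": corrupted, "label": 0})
    return instances

# A model under test is then scored per instance: does it correctly judge
# each (input, claimed output) pair as consistent or inconsistent?
data = make_consistency_instances(lambda n: n * n, [1, 2, 3])
```

Because each instance has a ground-truth label, both classification models (direct yes/no) and generative models (compare generated output to the claim) can be evaluated at the instance level, which is what enables the fine-grained diagnosis the paper argues for.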

📝 Abstract
Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet current benchmarks provide only coarse performance summaries that obscure the diverse capabilities and limitations of these models. This paper investigates whether LLMs' code-comprehension performance aligns with traditional human-centric software metrics or instead reflects distinct, non-human regularities. We introduce a diagnostic framework that reframes code understanding as a binary input–output consistency task, enabling the evaluation of both classification and generative models. Using a large-scale dataset, we correlate model performance with traditional, human-centric complexity metrics, such as lexical size, control-flow complexity, and abstract syntax tree structure. Our analyses reveal minimal correlation between human-defined metrics and LLM success (AUROC 0.63), while shadow models achieve substantially higher predictive performance (AUROC 0.86), capturing complex, partially predictable patterns beyond traditional software measures. These findings suggest that LLM comprehension reflects model-specific regularities only partially accessible through either human-designed or learned features, emphasizing the need for benchmark methodologies that move beyond aggregate accuracy and toward instance-level diagnostics, while acknowledging fundamental limits in predicting correct outcomes.
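The AUROC figures quoted in the abstract measure how well a per-instance predictor (a complexity metric, or a shadow model's score) separates instances the LLM gets right from those it gets wrong. A minimal sketch of that analysis, using the rank-sum (Mann–Whitney U) identity for AUROC and synthetic numbers rather than the paper's data:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    `scores` are predictor values (higher = more likely correct);
    `labels` are 1 if the LLM solved the instance, else 0.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-instance data: cyclomatic complexity as the predictor,
# negated so that lower complexity predicts LLM success.
complexity = [1, 2, 3, 8, 9, 10]
correct = [1, 1, 1, 0, 1, 0]
print(auroc([-c for c in complexity], correct))  # → 0.875
```

An AUROC of 0.5 is chance-level discrimination and 1.0 is perfect, so the gap the paper reports (0.63 for human-centric metrics vs. 0.86 for shadow models) is what motivates the claim that LLM comprehension follows patterns those metrics do not capture.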
Problem

Research questions and friction points this paper is trying to address.

code comprehension
large language models
software complexity metrics
model evaluation
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

code comprehension
large language models
diagnostic framework
shadow models
software complexity metrics
🔎 Similar Papers
2024-02-08 · International Conference on Machine Learning · Citations: 6