🤖 AI Summary
Existing evaluations of large language models' (LLMs') code reasoning capabilities lack a fine-grained, disentangled assessment framework. Method: We propose CodeMind, an evaluation framework that systematically distinguishes three code reasoning tasks — Independent Execution Reasoning (IER), Specification Reasoning (SR), and Dynamic Semantics Reasoning (DSR) — spanning both explicit and implicit reasoning. Contribution/Results: Empirical analysis of ten models across four widely used benchmarks, covering diverse scales and training strategies, reveals: (1) performance on the code reasoning tasks shows no significant correlation with bug-repair performance; (2) models reason reasonably well about simple dynamic behavior but degrade sharply on code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls; and (3) only advanced frontier models appear to incorporate code reasoning when repairing bugs. Because the three tasks evaluate LLMs differently, a comprehensive assessment of code reasoning requires all of them; CodeMind thus provides a principled, capability-aware framework for evaluating code reasoning in LLMs.
📝 Abstract
Large Language Models (LLMs) have been widely used to automate programming tasks. Their capabilities are typically evaluated by assessing the quality of generated code through tests or proofs. The extent to which they can reason about code is a critical question that reveals important insights about their true capabilities. This paper introduces CodeMind, a framework designed to gauge the code reasoning abilities of LLMs through the following explicit and implicit code reasoning tasks: Independent Execution Reasoning (IER), Specification Reasoning (SR), and Dynamic Semantics Reasoning (DSR). The first evaluates the ability of LLMs to simulate the execution of a given code on given inputs and predict the output (IER). The second assesses their ability to incorporate the simulation of test data in the specification into code generation (SR). Finally, CodeMind evaluates their ability to understand the overall semantics of a code given only a specific input/output pair (DSR). Our extensive evaluation of ten LLMs across four widely used benchmarks using CodeMind shows that LLMs, depending on their size and training strategy, can reason about some dynamic aspects of code. However, their performance drops for code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls. We show that these reasoning tasks evaluate LLMs differently, and that a comprehensive evaluation of code reasoning requires them all. Finally, we show that LLM performance in bug repair is not correlated with any of the code reasoning tasks, and that, except for advanced frontier models, LLMs do not incorporate code reasoning when performing bug repair.
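To make the three tasks concrete, here is a toy sketch of what an IER-style item might look like. The subject program (`count_vowel_runs`) and its test input are illustrative inventions, not drawn from the paper's benchmarks: the model is shown the code and an input and must predict the output without executing it.

```python
def count_vowel_runs(s: str) -> int:
    """Toy subject program: count maximal runs of consecutive vowels in s."""
    runs, in_run = 0, False
    for ch in s:
        if ch in "aeiou":
            if not in_run:      # a new vowel run starts here
                runs += 1
            in_run = True
        else:
            in_run = False
    return runs

# IER-style prompt: "Given the code above and the input 'reasoning',
# what is the output?" The ground truth (from actually running it) is:
print(count_vowel_runs("reasoning"))  # → 3  (runs: "ea", "o", "i")
```

An SR item would instead ask the model to generate code satisfying a specification that includes such input/output pairs, and a DSR item would ask it to characterize the program's overall semantics given only a pair like `("reasoning", 3)`.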