CodeMind: Evaluating Large Language Models for Code Reasoning

📅 2024-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of large language models (LLMs) for programming judge the quality of generated code through tests or proofs, but do not directly measure how well models reason about code. Method: We propose CodeMind, an evaluation framework built around three code reasoning tasks spanning explicit and implicit reasoning: Independent Execution Reasoning (IER), which asks a model to simulate execution of a given input and predict the output; Specification Reasoning (SR), which assesses whether a model incorporates test data from the specification into code generation; and Dynamic Semantics Reasoning (DSR), which probes understanding of overall code semantics given only a specific input/output pair. Contribution/Results: Evaluation of ten LLMs across four widely used benchmarks shows that (1) models can reason about some dynamic aspects of code, depending on size and training strategy, but performance drops sharply on code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls; (2) the three tasks evaluate LLMs differently, so a comprehensive assessment of code reasoning requires all of them; and (3) bug repair performance is not correlated with any of the code reasoning tasks, and except for advanced frontier models, LLMs do not incorporate code reasoning when repairing bugs. CodeMind thereby establishes a principled, capability-aware framework for evaluating code reasoning in LLMs.

📝 Abstract
Large Language Models (LLMs) have been widely used to automate programming tasks. Their capabilities have been evaluated by assessing the quality of generated code through tests or proofs. The extent to which they can reason about code is a critical question revealing important insights about their true capabilities. This paper introduces CodeMind, a framework designed to gauge the code reasoning abilities of LLMs through the following explicit and implicit code reasoning tasks: Independent Execution Reasoning (IER), Specification Reasoning (SR), and Dynamic Semantics Reasoning (DSR). The first evaluates the abilities of LLMs to simulate the execution of a given input to code and predict the output (IER). The second assesses the abilities of LLMs to incorporate the simulation of test data in the specification into code generation (SR). Finally, CodeMind evaluates LLMs' abilities to understand overall code semantics given only a specific input/output pair (DSR). Our extensive evaluation of ten LLMs across four widely used benchmarks using CodeMind shows that LLMs, depending on their size and training strategy, can reason about some dynamic aspects of code. However, their performance drops for code with higher complexity, non-trivial logical and arithmetic operators, non-primitive types, and API calls. We show that these reasoning tasks evaluate LLMs differently, and a comprehensive evaluation of code reasoning requires them all. Finally, we show that the performance of LLMs in bug repair is not correlated with any of the code reasoning tasks, and except for advanced frontier models, other LLMs do not incorporate code reasoning when performing bug repair.
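For intuition, an IER-style item pairs a code snippet with a concrete input and asks the model to predict the output without running the code. A minimal, hypothetical example of such a task (the function and input below are illustrative, not drawn from the paper's benchmarks):

```python
def count_vowel_runs(s: str) -> int:
    """Count maximal runs of consecutive vowels in s."""
    vowels = set("aeiou")
    runs, in_run = 0, False
    for ch in s:
        if ch in vowels:
            if not in_run:
                runs += 1      # a new vowel run starts here
                in_run = True
        else:
            in_run = False
    return runs

# IER task: given the code above and the input "queueing theory",
# the model must predict the return value by simulating execution.
print(count_vowel_runs("queueing theory"))  # prints 2 ("ueuei" and "eo")
```

Answering correctly requires tracking control flow and the `in_run` state across iterations, which is the kind of dynamic behavior IER is meant to probe.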
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to simulate code execution and predict outputs
Assessing LLMs' capacity to integrate test data into code generation
Measuring LLMs' understanding of code semantics from input/output pairs
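The third point (understanding semantics from input/output pairs, the DSR setting) can be sketched with a small, hypothetical harness; the candidate implementations and the observed pair below are invented for illustration and not taken from the paper:

```python
def consistent_with_observation(candidate, observed_input, observed_output):
    """DSR-style check: does a candidate implementation reproduce
    the single observed input/output pair?"""
    try:
        return candidate(observed_input) == observed_output
    except Exception:
        return False

# Observed behavior the model must explain: f([1, 3, 2]) -> 3
observed_input, observed_output = [1, 3, 2], 3

# Two hypothetical semantics a model might infer:
def hypothesis_max(xs):    # "f returns the maximum element"
    return max(xs)

def hypothesis_first(xs):  # "f returns the first element"
    return xs[0]

print(consistent_with_observation(hypothesis_max, observed_input, observed_output))
print(consistent_with_observation(hypothesis_first, observed_input, observed_output))
```

Only the first hypothesis is consistent with the observation, illustrating how a single input/output pair can discriminate between competing readings of what the code computes.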
Innovation

Methods, ideas, or system contributions that make the work stand out.

IER task evaluates LLMs' simulation of code execution and output prediction
SR task assesses LLMs' incorporation of specification test data into code generation
DSR task measures LLMs' understanding of code semantics from input/output pairs