Tracing LLM Reasoning Processes with Strategic Games: A Framework for Planning, Revision, and Resource-Constrained Decision Making

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses LLM evaluation's overreliance on final outputs and its neglect of intermediate reasoning processes, proposing the first dynamic reasoning-assessment framework tailored to strategic games. Methodologically, it defines and quantifies four intermediate-state metrics (over-correction risk rate, correction success rate, improvement slope, and over-budget ratio) and combines adversarial testing protocols with cross-model behavioral analysis over 4,320 test rounds, enabling fine-grained tracking of planning, revision, and resource-constrained decision making. Key contributions include: (1) a break from static, outcome-based evaluation paradigms; (2) empirical evidence of a negative correlation between correction frequency and effectiveness (Pearson r = −0.51); and (3) a demonstration that ChatGPT-o3-mini achieves the best overall performance (74.7% win rate, 78.6% correction success rate, and 0.041 improvement slope).

📝 Abstract
Large language models (LLMs) are increasingly used for tasks that require complex reasoning. Most benchmarks focus on final outcomes but overlook the intermediate reasoning steps, such as planning, revision, and decision making under resource constraints. We argue that measuring these internal processes is essential for understanding model behavior and improving reliability. We propose using strategic games as a natural evaluation environment: closed, rule-based systems with clear states, limited resources, and automatic feedback. We introduce a framework that evaluates LLMs along three core dimensions: planning, revision, and resource-constrained decision making. To operationalize this, we define metrics beyond win rate, including overcorrection risk rate, correction success rate, improvement slope, and over-budget ratio. In 4,320 adversarial rounds across 12 leading models, ChatGPT-o3-mini achieves the top composite score, with a win rate of 74.7 percent, a correction success rate of 78.6 percent, and an improvement slope of 0.041. By contrast, Qwen-Plus, despite an overcorrection risk rate of 81.6 percent, wins only 25.6 percent of its matches, primarily due to excessive resource use. We also observe a negative correlation between overcorrection risk rate and correction success rate (Pearson r = −0.51, p = 0.093), suggesting that more frequent edits do not always improve outcomes. Our findings highlight the value of assessing not only what LLMs decide but how they arrive at those decisions.
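The paper does not spell out the metric formulas here, but the abstract's four process metrics can be sketched from per-round game logs. The definitions below are illustrative assumptions, not the authors' exact formulations: over-correction risk rate as the share of revisions made when none was needed, correction success rate as the share of revisions that improved the outcome, improvement slope as the least-squares slope of per-round scores, and over-budget ratio as the fraction of rounds exceeding their resource budget.

```python
def correction_metrics(corrections):
    """corrections: list of (was_needed, improved) flags, one per revision.

    Returns (overcorrection_risk_rate, correction_success_rate).
    Assumed definitions, for illustration only.
    """
    total = len(corrections)
    if total == 0:
        return 0.0, 0.0
    # Revisions attempted when no correction was actually needed.
    risk = sum(1 for needed, _ in corrections if not needed) / total
    # Revisions that improved the game state or outcome.
    success = sum(1 for _, improved in corrections if improved) / total
    return risk, success


def improvement_slope(scores):
    """Least-squares slope of per-round scores; positive means improvement."""
    n = len(scores)
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den


def over_budget_ratio(used, budgets):
    """Fraction of rounds whose resource use exceeded the allotted budget."""
    return sum(1 for u, b in zip(used, budgets) if u > b) / len(used)
```

For example, `improvement_slope([1, 2, 3, 4])` returns `1.0` (a steady one-point gain per round), and `over_budget_ratio([5, 12, 8], [10, 10, 10])` returns one third, since only the second round overspent.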
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' intermediate reasoning steps like planning and revision
Assessing decision-making under resource constraints in strategic games
Measuring model behavior beyond win rates for reliability improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strategic games evaluate LLM reasoning processes
Metrics include correction success and over-budget ratios
Framework assesses planning, revision, resource decisions