MastermindEval: A Simple But Scalable Reasoning Benchmark

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reasoning benchmarks lag behind the rapid advances in complex logical reasoning made by large language models (LLMs) such as o1 and R1. To close this gap, the authors propose MastermindEval, a deductive reasoning benchmark inspired by the board game Mastermind that supports two evaluation paradigms: interactive agent-based play and static deductive reasoning over a pre-played game state. Built on rule-driven state modeling and multi-step constraint satisfaction, the benchmark scales linearly in difficulty, is easy to interpret, and is sensitive to how deeply a model integrates information across statements. Experiments show that mainstream LLMs hit significant performance bottlenecks even on simple multi-statement reasoning tasks, exposing a fundamental limitation in fusing logical information across statements. Because its difficulty can grow alongside model capability, MastermindEval offers a reliable, diagnosable standard for rigorous reasoning evaluation.

📝 Abstract
Recent advancements in large language models (LLMs) have led to remarkable performance across a wide range of language understanding and mathematical tasks. As a result, increasing attention has been given to assessing the true reasoning capabilities of LLMs, driving research into commonsense, numerical, logical, and qualitative reasoning. However, with the rapid progress of reasoning-focused models such as OpenAI's o1 and DeepSeek's R1, there has been a growing demand for reasoning benchmarks that can keep pace with ongoing model developments. In this paper, we introduce MastermindEval, a simple, scalable, and interpretable deductive reasoning benchmark inspired by the board game Mastermind. Our benchmark supports two evaluation paradigms: (1) agentic evaluation, in which the model autonomously plays the game, and (2) deductive reasoning evaluation, in which the model is given a pre-played game state with only one possible valid code to infer. In our experimental results we (1) find that even easy Mastermind instances are difficult for current models and (2) demonstrate that the benchmark is scalable to possibly more advanced models in the future. Furthermore, we investigate possible reasons why models cannot deduce the final solution and find that current models are limited in deducing the concealed code as the number of statements from which information must be combined increases.
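The deductive paradigm described above reduces to classic Mastermind constraint satisfaction: each past guess and its peg feedback rules out candidate codes, and a valid evaluation instance leaves exactly one code standing. A minimal sketch of that setup (the helper names `feedback` and `consistent_codes` and the toy game state are illustrative assumptions, not taken from the paper):

```python
from collections import Counter
from itertools import product

def feedback(secret, guess):
    """Return (black, white) pegs: black = right color in the right
    position; white = right color in the wrong position."""
    black = sum(s == g for s, g in zip(secret, guess))
    # Color overlap irrespective of position, minus exact matches.
    overlap = sum((Counter(secret) & Counter(guess)).values())
    return black, overlap - black

def consistent_codes(constraints, colors, slots):
    """All codes consistent with every (guess, feedback) pair."""
    return [code for code in product(colors, repeat=slots)
            if all(feedback(code, g) == fb for g, fb in constraints)]

# Toy pre-played state over 2 colors and 4 slots: the recorded
# guesses and feedback admit exactly one remaining valid code.
constraints = [
    (("A", "A", "B", "B"), (2, 2)),
    (("A", "B", "A", "B"), (0, 4)),
]
remaining = consistent_codes(constraints, "AB", 4)
# → [('B', 'A', 'B', 'A')]
```

Scaling the number of colors, slots, or recorded guesses is what gives the benchmark its tunable difficulty: more statements to combine means deeper cross-statement integration is required to isolate the single valid code.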
Problem

Research questions and friction points this paper is trying to address.

Assessing reasoning capabilities of large language models.
Developing scalable reasoning benchmarks for advanced models.
Investigating limitations in deductive reasoning of current models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

MastermindEval: scalable deductive reasoning benchmark
Supports agentic and deductive evaluation paradigms
Identifies model limitations in combining information