Recitation over Reasoning: How Cutting-Edge Language Models Can Fail on Elementary School-Level Reasoning Problems?

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether state-of-the-art large language models (LLMs) possess genuine reasoning capabilities on elementary-level reasoning tasks or merely rely on pattern memorization from training data. Method: We introduce RoR-Bench, a multimodal benchmark, and propose a novel “conditional perturbation + performance cliff” paradigm: systematically applying controlled, minimal perturbations—such as numeric changes, logical relation inversions, or syntactic rephrasings—to expose failures in compositional generalization to unseen condition combinations. We integrate multimodal prompting, cross-model consistency analysis, and formal task modeling. Contribution/Results: Experiments reveal up to 60% accuracy drops under minor perturbations for models including OpenAI-o1 and DeepSeek-R1, strongly indicating recitation-dominated behavior. This work provides the first systematic empirical validation that current LLMs lack structured generalization in foundational reasoning, establishing a reproducible, quantitative benchmark and methodology for assessing authentic reasoning.
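The "conditional perturbation + performance cliff" paradigm can be sketched in a few lines: score a model on the original problems, score it again on minimally perturbed variants, and report the accuracy drop. This is a minimal illustrative sketch, not the authors' released code; the helper names (`accuracy`, `performance_cliff`) and the toy answer lists are hypothetical.

```python
# Sketch of the "conditional perturbation + performance cliff" evaluation.
# All names and data here are illustrative assumptions, not the RoR-Bench code.

def accuracy(model_answers, gold_answers):
    """Fraction of model answers that match the gold labels."""
    correct = sum(a == g for a, g in zip(model_answers, gold_answers))
    return correct / len(gold_answers)

def performance_cliff(orig_acc, perturbed_acc):
    """Absolute accuracy drop after subtly shifting each problem's conditions."""
    return orig_acc - perturbed_acc

# Hypothetical scenario: the model answers the well-known originals correctly,
# but keeps reciting the memorized answers once one phrase is changed.
orig = accuracy(["8", "12", "5"], ["8", "12", "5"])        # originals
perturbed = accuracy(["8", "12", "5"], ["8", "10", "7"])   # perturbed variants
print(f"performance cliff: {performance_cliff(orig, perturbed):.0%}")
```

A large cliff under such minimal edits is the paper's evidence for recitation: a model that genuinely reasoned over the stated conditions would not collapse when only one phrase changes.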

📝 Abstract
The rapid escalation of LLM benchmark difficulty in recent years, from elementary school-level to frontier problems, has created the impression among researchers that we are only inches away from surpassing human intelligence. However, does the LLMs' remarkable reasoning ability indeed come from true intelligence by human standards, or are they simply reciting solutions witnessed during training at an Internet scale? To study this problem, we propose RoR-Bench, a novel multi-modal benchmark for detecting LLMs' recitation behavior when they are asked simple reasoning problems with subtly shifted conditions, and conduct empirical analysis on our benchmark. Surprisingly, we found that existing cutting-edge LLMs unanimously exhibit extremely severe recitation behavior; by changing one phrase in the condition, top models such as OpenAI-o1 and DeepSeek-R1 can suffer a 60% performance loss on elementary school-level arithmetic and reasoning problems. Such findings are a wake-up call to the LLM community, compelling us to re-evaluate the true intelligence level of cutting-edge LLMs.
Problem

Research questions and friction points this paper is trying to address.

Detects LLM recitation behavior in simple reasoning tasks
Evaluates performance loss under subtly shifted conditions
Challenges true intelligence level of cutting-edge LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed RoR-Bench for recitation detection
Multi-modal benchmark with subtly shifted conditions
Empirical analysis reveals severe recitation behavior
Kai Yan
ByteDance Seed, University of Illinois Urbana-Champaign
Yufei Xu
ByteDance Seed
Zhengyin Du
ByteDance Seed
Large Language Model · Multi-modal Learning
Xuesong Yao
Master of Mechanics, Peking University
Machine Learning · Large Language Model
Zheyu Wang
ByteDance Seed
Xiaowen Guo
ByteDance Seed
Jiecao Chen
ByteDance Seed
LLM · Reasoning · Agent · Tool Use · Memory