WinoWhat: A Parallel Corpus of Paraphrased WinoGrande Sentences with Common Sense Categorization

📅 2025-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the overestimation of large language models' (LLMs) commonsense reasoning capabilities, focusing on the limitations of the Winograd Schema Challenge. It introduces WinoWhat, a parallel corpus derived from the WinoGrande validation set that pairs semantics-preserving paraphrases with fine-grained annotations across five commonsense knowledge categories. WinoWhat is the first benchmark to combine systematic paraphrasing with commonsense category labeling for evaluation. Through training-data matching experiments, the authors empirically test, and refute, the hypothesis that benchmark memorization dominates performance. Results show that all LLMs, across scales, exhibit an average 12.7% performance drop on WinoWhat, revealing substantial over-optimism in existing evaluations. Crucially, memorization effects are negligible (<1.5%), underscoring a fundamental generalization bottleneck. This work establishes a more rigorous, interpretable, and knowledge-aware benchmark for commonsense reasoning assessment.

📝 Abstract
In this study, we take a closer look at how Winograd schema challenges can be used to evaluate common sense reasoning in LLMs. Specifically, we evaluate generative models of different sizes on the popular WinoGrande benchmark. We release WinoWhat, a new corpus in which each instance of the WinoGrande validation set is paraphrased. Additionally, we evaluate performance on the challenge across five common sense knowledge categories, giving more fine-grained insights into which types of knowledge are more challenging for LLMs. Surprisingly, all models perform significantly worse on WinoWhat, implying that LLM reasoning capabilities are overestimated on WinoGrande. To verify whether this is an effect of benchmark memorization, we match benchmark instances to LLM training data and create two test suites. We observe that memorization has a minimal effect on model performance on WinoGrande.
Problem

Research questions and friction points this paper is trying to address.

Evaluates common sense reasoning in LLMs using Winograd schema challenges
Assesses generative models on paraphrased WinoGrande benchmark (WinoWhat)
Analyzes performance across five common sense knowledge categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Paraphrased WinoGrande sentences for evaluation
Common sense categorization for fine-grained analysis
Training data matching to assess memorization effects
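The training-data matching idea above can be illustrated with a minimal sketch: flag benchmark instances whose word-level n-grams also occur in a training document, then split the benchmark into "seen" and "unseen" test suites. The n-gram size and the matching procedure here are illustrative assumptions, not the paper's actual implementation.

```python
def ngrams(text, n=8):
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def matches_training_data(instance, training_docs, n=8):
    """Flag a benchmark instance that shares any n-gram with a training document."""
    inst_grams = ngrams(instance, n)
    return any(inst_grams & ngrams(doc, n) for doc in training_docs)

def split_suites(instances, training_docs, n=8):
    """Split benchmark instances into (seen, unseen) test suites."""
    seen = [s for s in instances if matches_training_data(s, training_docs, n)]
    unseen = [s for s in instances if not matches_training_data(s, training_docs, n)]
    return seen, unseen
```

In practice such matching would be run against a corpus index rather than raw documents, but the seen/unseen split is the essential step for measuring memorization effects.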
Ine Gevers
CLiPS, University of Antwerp
Victor De Marez
CLiPS, University of Antwerp
Luna De Bruyne
CLiPS, University of Antwerp
Walter Daelemans
Professor of Computational Linguistics, University of Antwerp
Computational Linguistics, Natural Language Processing, Computational Psycholinguistics