Let's Play Across Cultures: A Large Multilingual, Multicultural Benchmark for Assessing Language Models' Understanding of Sports

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing language model evaluations are strongly biased toward globally dominant sports and neglect regional and traditional sporting cultures. To address this gap, we introduce CultSportQA, the first multimodal benchmark for traditional-sports question answering, covering 60 countries across six continents and comprising 33,000 text and image multiple-choice questions spanning history-based, rule-based, and scenario-based domains. The accompanying cross-cultural, multilingual, and multimodal evaluation framework enables systematic assessment under zero-shot, few-shot, and chain-of-thought prompting paradigms. Empirical evaluation of large, small, and multimodal foundation models reveals significant cultural knowledge gaps and systematic biases against non-dominant sports traditions. CultSportQA thus provides a standardized, scalable, and open-source benchmark for assessing AI's cultural inclusivity and multicultural understanding, advancing more equitable evaluation methodologies in AI.

📝 Abstract
Language Models (LMs) are primarily evaluated on globally popular sports, often overlooking regional and indigenous sporting traditions. To address this gap, we introduce ***CultSportQA***, a benchmark designed to assess LMs' understanding of traditional sports across 60 countries and 6 continents, encompassing four distinct cultural categories. The dataset features 33,000 multiple-choice questions (MCQs) across text and image modalities, each of which is categorized into three key types: history-based, rule-based, and scenario-based. To evaluate model performance, we employ zero-shot, few-shot, and chain-of-thought (CoT) prompting across a diverse set of Large Language Models (LLMs), Small Language Models (SLMs), and Multimodal Large Language Models (MLMs). By providing a comprehensive multilingual and multicultural sports benchmark, ***CultSportQA*** establishes a new standard for assessing AI's ability to understand and reason about traditional sports.
Problem

Research questions and friction points this paper is trying to address.

Evaluating language models on regional and indigenous sports traditions
Assessing AI understanding of traditional sports across 60 countries
Testing multilingual multicultural sports knowledge through diverse question types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual, multicultural sports benchmark for evaluation
33,000 multimodal questions across 60 countries
Zero-shot, few-shot, and chain-of-thought prompting methods
Punit Kumar Singh
Indian Institute of Technology Patna, India
Nishant Kumar
Indian Institute of Technology Patna, India
Akash Ghosh
Indian Institute of Technology Patna, India
Kunal Pasad
Sardar Patel Institute of Technology, Mumbai
Khushi Soni
Sardar Patel Institute of Technology, Mumbai
Manisha Jaishwal
Indian Institute of Technology Patna, India
Sriparna Saha
Indian Institute of Technology Patna, India
Syukron Abu Ishaq Alfarozi
Universitas Gadjah Mada
Intelligent Systems, Machine Learning, Computer Vision, Natural Language Processing
Asres Temam Abagissa
Indian Institute of Technology Patna, India
Kitsuchart Pasupa
Professor, School of Information Technology, King Mongkut's Institute of Technology Ladkrabang
Machine Learning, Pattern Recognition, Artificial Intelligence
Haiqin Yang
Shenzhen Technology University, China
Jose G Moreno
Associate Professor, University of Toulouse, IRIT
Information Retrieval, Natural Language Processing, Information Extraction