🤖 AI Summary
Existing language model evaluations skew heavily toward globally dominant sports, neglecting regional and indigenous sporting traditions. To address this gap, we introduce CultSportQA, the first multimodal benchmark for traditional-sports question answering, covering 60 countries across six continents and comprising 33,000 text and image multiple-choice questions spanning history-based, rule-based, and scenario-based categories. The accompanying cross-cultural, multilingual, and multimodal evaluation framework enables systematic assessment under zero-shot, few-shot, and chain-of-thought prompting paradigms. Empirical evaluation across large, small, and multimodal foundation models reveals significant cultural knowledge gaps and systematic biases against non-dominant sports traditions. CultSportQA thus provides a standardized, scalable, and open-source benchmark for assessing AI's cultural inclusivity and multicultural understanding, advancing equitable evaluation methodologies in AI.
📝 Abstract
Language Models (LMs) are primarily evaluated on globally popular sports, often overlooking regional and indigenous sporting traditions. To address this gap, we introduce CultSportQA, a benchmark designed to assess LMs' understanding of traditional sports across 60 countries and six continents, encompassing four distinct cultural categories. The dataset features 33,000 multiple-choice questions (MCQs) across text and image modalities, each of which is categorized into three key types: history-based, rule-based, and scenario-based. To evaluate model performance, we employ zero-shot, few-shot, and chain-of-thought (CoT) prompting across a diverse set of Large Language Models (LLMs), Small Language Models (SLMs), and Multimodal Large Language Models (MLMs). By providing a comprehensive multilingual and multicultural sports benchmark, CultSportQA establishes a new standard for assessing AI's ability to understand and reason about traditional sports.
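To make the three prompting paradigms concrete, below is a minimal sketch of an MCQ evaluation loop. The dataset schema (`question`/`options`/`answer` fields), the `ask_model` callable, and the few-shot exemplars are all illustrative assumptions, not CultSportQA's released harness; for image-modality questions, `ask_model` would additionally accept the image when querying an MLM.

```python
# Minimal sketch of MCQ evaluation under zero-shot, few-shot, and CoT
# prompting. All names here (ask_model, the item schema, exemplars) are
# hypothetical placeholders for whatever harness the benchmark ships with.
import re

def format_question(item):
    """Render one MCQ item as 'Question: ...' followed by lettered options."""
    opts = "\n".join(f"{label}. {text}" for label, text in item["options"].items())
    return f"Question: {item['question']}\n{opts}"

def build_prompt(item, mode="zero-shot", exemplars=()):
    """Format an MCQ for the chosen prompting paradigm."""
    lines = []
    if mode == "few-shot":
        # Prepend a handful of worked examples with their gold answers.
        for ex in exemplars:
            lines.append(format_question(ex) + f"\nAnswer: {ex['answer']}\n")
    lines.append(format_question(item))
    if mode == "cot":
        # Elicit intermediate reasoning before the final option letter.
        lines.append("Let's think step by step, then state the final option letter.")
    lines.append("Answer:")
    return "\n".join(lines)

def extract_choice(response):
    """Pull the first standalone option letter (A-D) from the model output,
    which is needed especially for CoT, where the answer follows free-form
    reasoning."""
    match = re.search(r"\b([A-D])\b", response)
    return match.group(1) if match else None

def evaluate(dataset, ask_model, mode="zero-shot", exemplars=()):
    """Accuracy of `ask_model` (a str -> str callable) over MCQ items."""
    correct = 0
    for item in dataset:
        prompt = build_prompt(item, mode=mode, exemplars=exemplars)
        correct += extract_choice(ask_model(prompt)) == item["answer"]
    return correct / len(dataset)
```

Under this setup, per-country or per-continent accuracy follows by grouping items before calling `evaluate`, which is how cultural knowledge gaps of the kind reported above would surface.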