MAKIEval: A Multilingual Automatic WiKidata-based Framework for Cultural Awareness Evaluation for LLMs

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) suffer from English-centric pretraining, which leads to cross-lingual gaps in cultural awareness and biased outputs; systematic evaluation of these gaps remains difficult because multilingual benchmarks are scarce and translation quality is unreliable. To address this, the paper proposes MAKIEval, an automated Wikidata-based framework for multilingual cultural awareness evaluation that covers 13 languages, 19 countries and regions, and six cultural topics, and requires no manual annotation or translation. By leveraging Wikidata's multilingual structured knowledge as a cross-lingual anchor, the framework automatically identifies cultural entities in open-ended model outputs and aligns them across languages. It introduces four complementary evaluation dimensions: granularity, diversity, cultural specificity, and cross-lingual consensus. An empirical study of seven mainstream LLMs shows that English prompts activate culturally grounded knowledge more effectively than prompts in other languages. All code and data are publicly released.
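
As a rough illustration of the cross-lingual anchoring idea described above, the sketch below resolves a cultural entity mentioned in a model's output to a Wikidata item and retrieves its labels in several languages. It uses the public Wikidata `wbsearchentities` and `wbgetentities` API actions; the function names and overall flow are illustrative assumptions, not the authors' released pipeline.

```python
# Minimal sketch (not the authors' code): using Wikidata's multilingual labels
# as a cross-lingual anchor. A surface form found in a model's output is resolved
# to a Wikidata QID, and the QID's labels in other languages let mentions of the
# same cultural entity be aligned across prompt languages.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def link_entity(surface_form: str, language: str) -> str | None:
    """Resolve a mention to a Wikidata QID via entity search (top hit only)."""
    params = {
        "action": "wbsearchentities",
        "search": surface_form,
        "language": language,
        "type": "item",
        "format": "json",
    }
    hits = requests.get(WIKIDATA_API, params=params, timeout=10).json().get("search", [])
    return hits[0]["id"] if hits else None

def multilingual_labels(qid: str, languages: list[str]) -> dict[str, str]:
    """Fetch the entity's labels in the requested languages (the cross-lingual anchor)."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels",
        "languages": "|".join(languages),
        "format": "json",
    }
    entity = requests.get(WIKIDATA_API, params=params, timeout=10).json()["entities"][qid]
    return {lang: lab["value"] for lang, lab in entity.get("labels", {}).items()}

if __name__ == "__main__":
    qid = link_entity("kimchi", "en")
    if qid:
        print(qid, multilingual_labels(qid, ["en", "ko", "zh", "ar"]))
```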

📝 Abstract
Large language models (LLMs) are used globally across many languages, but their English-centric pretraining raises concerns about cross-lingual disparities for cultural awareness, often resulting in biased outputs. However, comprehensive multilingual evaluation remains challenging due to limited benchmarks and questionable translation quality. To better assess these disparities, we introduce MAKIEval, an automatic multilingual framework for evaluating cultural awareness in LLMs across languages, regions, and topics. MAKIEval evaluates open-ended text generation, capturing how models express culturally grounded knowledge in natural language. Leveraging Wikidata's multilingual structure as a cross-lingual anchor, it automatically identifies cultural entities in model outputs and links them to structured knowledge, enabling scalable, language-agnostic evaluation without manual annotation or translation. We then introduce four metrics that capture complementary dimensions of cultural awareness: granularity, diversity, cultural specificity, and consensus across languages. We assess 7 LLMs developed from different parts of the world, encompassing both open-source and proprietary systems, across 13 languages, 19 countries and regions, and 6 culturally salient topics (e.g., food, clothing). Notably, we find that models tend to exhibit stronger cultural awareness in English, suggesting that English prompts more effectively activate culturally grounded knowledge. We publicly release our code and data.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual cultural awareness disparities in LLMs
Addressing lack of benchmarks for cross-lingual cultural evaluation
Measuring cultural specificity in model outputs without manual annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual Wikidata-based cultural evaluation framework
Automatic entity linking for language-agnostic assessment
Four metrics capturing complementary dimensions of cultural awareness (a simplified sketch follows below)
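
The paper defines the four metrics precisely; as a hedged illustration only, the sketch below approximates two of them (diversity and cross-lingual consensus) with simple set statistics over the Wikidata QIDs linked from each language's generations. The function names and formulas here are simplified stand-ins, not the paper's definitions.

```python
# Illustrative sketch: diversity as the count of distinct linked entities per
# language, and cross-lingual consensus as mean pairwise Jaccard overlap of
# entity sets across languages. These are simplified proxies for the metrics
# defined in the paper.
from itertools import combinations

def diversity(qids_per_language: dict[str, set[str]]) -> dict[str, int]:
    """Number of distinct cultural entities surfaced per prompt language."""
    return {lang: len(qids) for lang, qids in qids_per_language.items()}

def cross_lingual_consensus(qids_per_language: dict[str, set[str]]) -> float:
    """Mean pairwise Jaccard overlap of entity sets across languages (a simple proxy)."""
    pairs = list(combinations(qids_per_language.values(), 2))
    if not pairs:
        return 0.0
    overlaps = [len(a & b) / len(a | b) if (a | b) else 0.0 for a, b in pairs]
    return sum(overlaps) / len(overlaps)

if __name__ == "__main__":
    outputs = {
        "en": {"Q1", "Q2", "Q3"},
        "zh": {"Q2", "Q3"},
        "ar": {"Q3", "Q4"},
    }
    print(diversity(outputs))                        # {'en': 3, 'zh': 2, 'ar': 2}
    print(round(cross_lingual_consensus(outputs), 3))
```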