GreekMMLU: A Native-Sourced Multitask Benchmark for Evaluating Language Models in Greek

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the absence of a comprehensive multitask evaluation benchmark for large language models in authentic Greek-language contexts. To this end, we introduce the first native Greek large-scale multitask language understanding benchmark, comprising 21,805 multiple-choice questions drawn from Greek academic, professional, and governmental examinations across 45 disciplines, stratified by educational difficulty. We propose a novel discipline taxonomy tailored to the Greek context, establish public and private test sets to mitigate data contamination, and conduct a systematic evaluation of more than 80 open- and closed-source models on this human-collected and annotated data. Our findings reveal substantial performance gaps between state-of-the-art and open-source models, as well as between Greek-adapted and general multilingual models, while offering detailed insights into the impact of model scale, adaptation strategies, and prompting techniques.
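The evaluation setting described here is standard zero-shot multiple-choice prompting. As a minimal sketch of what that looks like for a Greek-language item, the snippet below builds a prompt from a question and its options and extracts a predicted letter from the model's completion. The Latin A-D option labels, the Greek "Απάντηση:" cue, the field names, and the example question are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of zero-shot multiple-choice prompting for a GreekMMLU-style
# question. Option labels and the Greek answer cue are assumptions.

def build_prompt(question: str, choices: list[str]) -> str:
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append("Απάντηση:")  # Greek for "Answer:"
    return "\n".join(lines)

def extract_answer(completion: str) -> str:
    # Return the first option letter found in the model's completion, if any.
    for ch in completion.strip().upper():
        if ch in "ABCD":
            return ch
    return ""

# Hypothetical example item (not taken from the benchmark).
example = {
    "question": "Ποια είναι η πρωτεύουσα της Ελλάδας;",  # "What is the capital of Greece?"
    "choices": ["Θεσσαλονίκη", "Αθήνα", "Πάτρα", "Ηράκλειο"],
}
print(build_prompt(example["question"], example["choices"]))
print(extract_answer("Η σωστή απάντηση είναι B."))  # -> "B"
```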

📝 Abstract
Large Language Models (LLMs) are commonly trained on multilingual corpora that include Greek, yet reliable evaluation benchmarks for Greek, particularly those based on authentic, native-sourced content, remain limited. Existing datasets are often machine-translated from English, failing to capture Greek linguistic and cultural characteristics. We introduce GreekMMLU, a native-sourced benchmark for massive multitask language understanding in Greek, comprising 21,805 multiple-choice questions across 45 subject areas, organized under a newly defined subject taxonomy and annotated with educational difficulty levels spanning primary to professional examinations. All questions are sourced or authored in Greek from academic, professional, and governmental exams. We publicly release 16,857 samples and reserve 4,948 samples for a private leaderboard to enable robust and contamination-resistant evaluation. Evaluations of over 80 open- and closed-source LLMs reveal substantial performance gaps between frontier and open-weight models, as well as between Greek-adapted models and general multilingual ones. Finally, we provide a systematic analysis of factors influencing performance, including model scale, adaptation, and prompting, and derive insights for improving LLM capabilities in Greek.
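Because every item carries a difficulty-level annotation, a natural way to consume the public split is to report accuracy stratified by level rather than a single aggregate score. The sketch below assumes a list of records with `level`, `answer`, and `prediction` fields; this schema is an illustrative assumption, not the benchmark's actual released format.

```python
from collections import defaultdict

# Hypothetical records pairing gold answers with model predictions, each
# tagged with the educational difficulty level described in the abstract.
records = [
    {"level": "primary", "answer": "B", "prediction": "B"},
    {"level": "professional", "answer": "C", "prediction": "A"},
    # ... remaining benchmark items
]

def accuracy_by_level(items):
    correct, total = defaultdict(int), defaultdict(int)
    for r in items:
        total[r["level"]] += 1
        correct[r["level"]] += int(r["prediction"] == r["answer"])
    return {lvl: correct[lvl] / total[lvl] for lvl in total}

for level, acc in sorted(accuracy_by_level(records).items()):
    print(f"{level}: {acc:.1%}")
```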
Problem

Research questions and friction points this paper is trying to address.

Greek language
language model evaluation
multitask benchmark
native-sourced data
linguistic and cultural characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

GreekMMLU
native-sourced benchmark
multitask language understanding
contamination-resistant evaluation
language model evaluation