Polishing Every Facet of the GEM: Testing Linguistic Competence of LLMs and Humans in Korean

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the limitations of large language models (LLMs) in mastering deep linguistic competencies in Korean—particularly phonological and pragmatic abilities that rely on embodied, real-world experiential knowledge. Method: We introduce KoGEM, the first cognitively grounded Korean grammatical evaluation benchmark, comprising 1,500 multiple-choice items spanning five major categories and sixteen fine-grained subcategories; it features human annotation and a theoretically motivated, capability-specific taxonomy. Contribution/Results: Comprehensive zero-shot evaluation across 27 LLMs reveals strong performance on definitional knowledge tasks but substantial deficits—relative to human baselines—in experience-dependent tasks such as phonological rule application and speech perception. This work provides the first empirical evidence of fundamental phonological comprehension gaps in LLMs for Korean and proposes integrating embodied experiential knowledge as a novel direction for advancing linguistic understanding in foundation models.

📝 Abstract
We introduce the $\underline{Ko}rean\ \underline{G}rammar\ \underline{E}valuation\ Bench\underline{M}ark$ (KoGEM), designed to assess the linguistic competence of LLMs and humans in Korean. KoGEM consists of 1.5k multiple-choice QA pairs covering five main categories and 16 subcategories. A zero-shot evaluation of 27 LLMs of various sizes and types reveals that while LLMs perform remarkably well on straightforward tasks requiring primarily definitional knowledge, they struggle with tasks that demand the integration of real-world experiential knowledge, such as phonological rules and pronunciation. Furthermore, our in-depth analysis suggests that incorporating such experiential knowledge could enhance the linguistic competence of LLMs. With KoGEM, we not only highlight the limitations of current LLMs in linguistic competence but also uncover their hidden facets, paving the way for more comprehensive language understanding. Our code and dataset are available at: https://github.com/SungHo3268/KoGEM.
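For readers who want a concrete picture of the evaluation protocol described in the abstract, below is a minimal sketch of zero-shot multiple-choice scoring with per-category accuracy. The item fields (`question`, `options`, `answer`, `category`) and the `ask_model` callable are illustrative assumptions, not the released KoGEM data format or official evaluation code; see the GitHub repository for the actual implementation.

```python
import json
from collections import defaultdict

# Hypothetical item layout; the released KoGEM files may use different field names.
# Each item has a question, numbered options, a 1-indexed gold answer, and a category label.
EXAMPLE_ITEMS = [
    {
        "category": "Phonology",
        "question": "Which option applies the standard pronunciation rule correctly?",
        "options": ["option 1", "option 2", "option 3", "option 4"],
        "answer": 2,
    },
]

def build_zero_shot_prompt(item):
    """Format a single multiple-choice item as a zero-shot prompt (no in-context examples)."""
    lines = [item["question"]]
    for i, option in enumerate(item["options"], start=1):
        lines.append(f"{i}. {option}")
    lines.append("Answer with the number of the correct option.")
    return "\n".join(lines)

def evaluate(items, ask_model):
    """Score accuracy per category; `ask_model` maps a prompt string to a predicted option number."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        prediction = ask_model(build_zero_shot_prompt(item))
        total[item["category"]] += 1
        if prediction == item["answer"]:
            correct[item["category"]] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

if __name__ == "__main__":
    # Stub model that always answers "1", just to show the evaluation flow end to end.
    scores = evaluate(EXAMPLE_ITEMS, ask_model=lambda prompt: 1)
    print(json.dumps(scores, indent=2, ensure_ascii=False))
```

In practice, `ask_model` would wrap an LLM call and parse the chosen option number from its output; aggregating accuracy by category (and subcategory) is what enables the comparison between definitional and experience-dependent tasks highlighted above.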
Problem

Research questions and friction points this paper is trying to address.

Assessing linguistic competence of LLMs and humans in Korean
Evaluating LLMs' performance on Korean grammar and pronunciation tasks
Identifying limitations and improvement areas for LLMs in language understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Korean Grammar Evaluation Benchmark (KoGEM) introduced
Zero-shot evaluation of 27 diverse LLMs conducted
Evidence that incorporating experiential knowledge could enhance LLM linguistic competence
👥 Authors

SungHo Kim
Korea University Graduate (AI, NLP)

Nayeon Kim
Department of Computer Science and Engineering, Korea University, Seoul, South Korea

Taehee Jeon
Institute for Digital HUSS, Korea University, Seoul, South Korea

SangKeun Lee
Professor, Dept. of Artificial Intelligence, Korea University
Research interests: Data Intelligence, Deep Learning for Natural Language Processing, Artificial Intelligence, Machine …