ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual Knowledge Transfer

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing research lacks reliable methods to assess cross-lingual knowledge transfer in multilingual large language models (LLMs). This paper introduces ECLeKTic—the first closed-book question answering (CBQA) benchmark specifically designed to evaluate cross-lingual knowledge transfer in multilingual LLMs. ECLeKTic constructs controlled transfer tasks leveraging the uneven coverage of Wikipedia articles across 12 languages. Crucially, it employs Wikipedia article existence as an unsupervised, black-box proxy for transfer capability, eliminating reliance on manual annotation and mitigating language-alignment biases. Extensive evaluation across eight state-of-the-art multilingual LLMs reveals that while models achieve high accuracy on source-language QA, their average cross-lingual transfer accuracy drops by over 40%, exposing a fundamental bottleneck in current SOTA models’ ability to reuse knowledge across languages.

📝 Abstract
To achieve equitable performance across languages, multilingual large language models (LLMs) must be able to abstract knowledge beyond the language in which it was acquired. However, the current literature lacks reliable ways to measure LLMs' capability of cross-lingual knowledge transfer. To that end, we present ECLeKTic, a multilingual closed-book QA (CBQA) dataset that Evaluates Cross-Lingual Knowledge Transfer in a simple, black-box manner. We detected information with uneven coverage across languages by controlling for the presence and absence of Wikipedia articles in 12 languages. We generated knowledge-seeking questions in a source language, for which the answer appears in a relevant Wikipedia article, and translated them into the other 11 languages, in which the respective Wikipedias lack equivalent articles. Assuming that Wikipedia reflects the prominent knowledge in the LLM's training data, solving ECLeKTic's CBQA task requires the model to transfer knowledge between languages. Experimenting with 8 LLMs, we show that SOTA models struggle to effectively share knowledge across languages, even if they can predict the answer well for queries in the same language the knowledge was acquired in.
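The evaluation protocol the abstract describes — ask each question in its source language, then ask the translated versions in the 11 target languages whose Wikipedias lack the article — can be sketched as a small scoring loop. This is a minimal illustration, not the authors' code; the `Example` record, `model_fn` callable, and substring-match `is_correct` check are all assumptions for the sketch.

```python
# Hedged sketch of an ECLeKTic-style evaluation loop. All names here
# (Example, model_fn, is_correct) are illustrative assumptions, not the
# paper's actual implementation or API.
from dataclasses import dataclass, field

@dataclass
class Example:
    source_lang: str      # language whose Wikipedia contains the article
    source_question: str  # knowledge-seeking question in the source language
    answer: str           # gold answer
    # target_lang -> translated question, for the 11 languages whose
    # Wikipedias lack an equivalent article
    translations: dict = field(default_factory=dict)

def evaluate(examples, model_fn,
             is_correct=lambda pred, gold: gold.lower() in pred.lower()):
    """Return (in-language accuracy, cross-lingual transfer accuracy).

    model_fn(question, lang) is a closed-book QA call to the LLM under test.
    """
    source_hits = transfer_hits = transfer_total = 0
    for ex in examples:
        # In-language accuracy: the question asked in the language the
        # knowledge was (presumably) acquired in.
        if is_correct(model_fn(ex.source_question, ex.source_lang), ex.answer):
            source_hits += 1
        # Transfer accuracy: the same question in target languages where the
        # knowledge is absent from that language's Wikipedia.
        for lang, question in ex.translations.items():
            transfer_total += 1
            if is_correct(model_fn(question, lang), ex.answer):
                transfer_hits += 1
    return source_hits / len(examples), transfer_hits / max(transfer_total, 1)
```

The gap between the two returned numbers is the transfer deficit the paper reports: models that answer well in-language but fail on the translated queries have not abstracted the underlying fact away from its acquisition language.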
Problem

Research questions and friction points this paper is trying to address.

Evaluates cross-lingual knowledge transfer in LLMs
Measures LLMs' ability to abstract knowledge across languages
Assesses uneven knowledge coverage in multilingual datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual closed-book QA dataset ECLeKTic
Evaluates cross-lingual knowledge transfer
Uses Wikipedia article presence/absence control