It's All About the Confidence: An Unsupervised Approach for Multilingual Historical Entity Linking using Large Language Models

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes MHEL-LLaMo, an unsupervised multilingual entity linking framework designed to address the challenges posed by linguistic variation, noise, and semantic drift in historical texts. Unlike existing approaches, which rely heavily on annotated data or on non-scalable rule-based systems, MHEL-LLaMo introduces a dynamic scheduling mechanism that uses the confidence of a small language model (SLM) to invoke a large language model (LLM) only for difficult instances, specifically for candidate selection and NIL prediction, where prompt chaining improves accuracy. The framework integrates the multilingual bi-encoder BELA for efficient candidate retrieval and operates without fine-tuning, enabling high-precision linking even on low-resource historical corpora. Evaluated on four benchmarks spanning six European languages, MHEL-LLaMo significantly outperforms state-of-the-art methods while demonstrating strong efficiency, scalability, and cross-lingual generalization.

📝 Abstract
Despite the recent advancements in NLP with the advent of Large Language Models (LLMs), Entity Linking (EL) for historical texts remains challenging due to linguistic variation, noisy inputs, and evolving semantic conventions. Existing solutions either require substantial training data or rely on domain-specific rules that limit scalability. In this paper, we present MHEL-LLaMo (Multilingual Historical Entity Linking with Large Language MOdels), an unsupervised ensemble approach combining a Small Language Model (SLM) and an LLM. MHEL-LLaMo leverages a multilingual bi-encoder (BELA) for candidate retrieval and an instruction-tuned LLM for NIL prediction and candidate selection via prompt chaining. Our system uses the SLM's confidence scores to discriminate between easy and hard samples, applying an LLM only for hard cases. This strategy reduces computational costs while preventing hallucinations on straightforward cases. We evaluate MHEL-LLaMo on four established benchmarks in six European languages (English, Finnish, French, German, Italian and Swedish) from the 19th and 20th centuries. Results demonstrate that MHEL-LLaMo outperforms state-of-the-art models without requiring fine-tuning, offering a scalable solution for low-resource historical EL. The implementation of MHEL-LLaMo is available on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Entity Linking
Historical Texts
Multilingual
Large Language Models
Unsupervised
Innovation

Methods, ideas, or system contributions that make the work stand out.

unsupervised entity linking
large language models
confidence-based routing
multilingual historical NLP
prompt chaining