Ask a Local: Detecting Hallucinations With Specialized Model Divergence

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the lack of efficient, general-purpose detection methods for hallucinations—factually incorrect yet plausible content generated by large language models (LLMs)—this paper proposes a training-free, fine-tuning-free, zero-shot multilingual hallucination localization method. Our approach leverages perplexity distribution disparities across text spans, quantified via KL or Jensen–Shannon divergence, using lightweight domain-specific language models. Crucially, it requires no external knowledge bases, language-specific adaptation, or annotated data, enabling native multilingual support. Evaluated on human-annotated QA datasets spanning 14 languages, the method achieves an average Intersection-over-Union (IoU) of 0.30, with peak performance in Italian (0.42) and Catalan (0.38); Spearman correlation with human judgments remains consistently high. The implementation and model architecture are publicly released.

📝 Abstract
Hallucinations in large language models (LLMs), instances where models generate plausible but factually incorrect information, present a significant challenge for AI. We introduce "Ask a Local", a novel hallucination detection method exploiting the intuition that specialized models exhibit greater surprise when encountering domain-specific inaccuracies. Our approach computes divergence between perplexity distributions of language-specialized models to identify potentially hallucinated spans. The method is particularly well suited to multilingual settings, as it naturally scales to multiple languages without language-specific adaptation, external data sources, or training. Moreover, we select computationally efficient models, providing a scalable solution that can be applied to a wide range of languages and domains. Our results on a human-annotated question-answer dataset spanning 14 languages demonstrate consistent performance across languages, with Intersection-over-Union (IoU) scores around 0.3 and comparable Spearman correlation values. Our method performs particularly well on Italian and Catalan, with IoU scores of 0.42 and 0.38, respectively, while maintaining cross-lingual effectiveness without language-specific adaptations. We release our code and architecture to facilitate further research in multilingual hallucination detection.
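The divergence-based scoring described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the softmax normalization of token log-probabilities, and the flagging threshold are all assumptions, and it presumes per-span token log-probabilities from a general and a language-specialized model are already available.

```python
import math

def token_dist(logprobs):
    """Normalize a span's token log-probabilities into a probability
    distribution over its tokens (softmax; an assumed convention)."""
    exps = [math.exp(lp) for lp in logprobs]
    total = sum(exps)
    return [e / total for e in exps]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def score_spans(spans, general_lp, specialist_lp, threshold=0.05):
    """Flag spans where the specialist model's token-level distribution
    diverges from the general model's: high divergence suggests the
    specialist is 'surprised' by the span's content."""
    flagged = []
    for i, span in enumerate(spans):
        p = token_dist(general_lp[i])
        q = token_dist(specialist_lp[i])
        if js_divergence(p, q) > threshold:
            flagged.append(span)
    return flagged
```

A span where both models assign similar token probabilities yields near-zero divergence and is left alone; a span the specialist finds surprising relative to the general model exceeds the threshold and is flagged as potentially hallucinated.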
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in large language models (LLMs)
Localizing hallucinated spans accurately across many languages via specialized model divergence
Providing a scalable solution that requires no external data, annotation, or training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects hallucinations using specialized model divergence
Scales to multiple languages without adaptation
Uses computationally efficient models for scalability
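The Intersection-over-Union scores reported above measure overlap between predicted and human-annotated hallucinated spans. A minimal sketch of that metric (the `(start, end)` half-open span convention is an assumption, not taken from the paper):

```python
def span_iou(pred, gold):
    """IoU between a predicted and a gold span, each a (start, end)
    pair with end exclusive. 1.0 = exact match, 0.0 = no overlap."""
    start = max(pred[0], gold[0])
    end = min(pred[1], gold[1])
    inter = max(0, end - start)          # overlapping length
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union else 0.0
```

Under this convention, an average IoU of 0.30 means predicted spans overlap roughly a third of the union with the annotated spans.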
Aldan Creo
UC San Diego
AI-Generated Text Detection · AI Hallucinations · AI Fairness · AI Security
Héctor Cerezo-Costas
Fundación Centro Tecnolóxico de Telecomunicacións de Galicia (GRADIANT), Vigo, ES
Pedro Alonso-Doval
Fundación Centro Tecnolóxico de Telecomunicacións de Galicia (GRADIANT), Vigo, ES
Maximiliano Hormazábal-Lagos
Fundación Centro Tecnolóxico de Telecomunicacións de Galicia (GRADIANT), Vigo, ES