MKA: Leveraging Cross-Lingual Consensus for Model Abstention

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from persistent deficiencies in factual accuracy and confidence calibration, hindering their trustworthy deployment. To address this, we propose a novel confidence calibration paradigm grounded in multilingual response consistency: leveraging multilingual prompts to elicit cross-lingual knowledge from LLMs, aggregating responses across languages to derive language-agnostic consensus confidence scores, and dynamically triggering refusal mechanisms based on uncertainty estimates. Our approach requires no external supervision or model fine-tuning, enabling adaptive, language-independent uncertainty quantification and proactive response rejection. Evaluated on multilingual fact-checking tasks, it improves accuracy by 71.2% on Bengali and 15.5% on English benchmarks, substantially enhancing factual consistency and output reliability. The method offers a scalable, parameter-free pathway toward improving LLM trustworthiness, advancing research on robust and calibrated language model behavior.
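The aggregation step described above (eliciting answers in several languages, deriving a consensus confidence, and abstaining when it is low) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `threshold` value and the assumption that cross-lingual answers have already been normalized to a shared label (e.g., translated back to English) are mine, not details from the paper.

```python
from collections import Counter

def consensus_confidence(answers):
    """Return the majority answer and the fraction of cross-lingual
    answers that agree with it (a language-agnostic confidence score)."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

def answer_or_abstain(answers, threshold=0.6):
    """Return the consensus answer, or None (abstain) when the
    cross-lingual agreement falls below the confidence threshold."""
    answer, confidence = consensus_confidence(answers)
    return answer if confidence >= threshold else None
```

For example, if prompts in four languages yield `["Paris", "Paris", "Paris", "Lyon"]`, agreement is 0.75 and the model answers "Paris"; if the four answers all disagree, agreement is 0.25 and the pipeline abstains.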

📝 Abstract
The reliability of LLMs is questionable even as they get better at more tasks. Wider adoption of LLMs is contingent on whether they are usably factual and, if they are not, on whether they can properly calibrate their confidence in their responses. This work focuses on utilizing the multilingual knowledge of an LLM to inform its decision to abstain or answer when prompted. We develop a multilingual pipeline to calibrate the model's confidence and let it abstain when uncertain. We run several multilingual models through the pipeline to profile them across different languages. We find that the performance of the pipeline varies by model and language, but that models generally benefit from it. This is evidenced by an accuracy improvement of 71.2% for Bengali over a baseline without the pipeline. Even a high-resource language like English sees a 15.5% improvement. These results hint at possible further improvements.
Problem

Research questions and friction points this paper is trying to address.

Improving LLM reliability by cross-lingual confidence calibration
Enhancing model abstention decisions using multilingual knowledge
Boosting accuracy across languages via calibrated uncertainty handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual pipeline for confidence calibration
Cross-lingual consensus for abstention decisions
Accuracy improvement across diverse languages