Random-Set Large Language Models

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of quantifying epistemic uncertainty and enhancing credibility in large language model (LLM) text generation. We propose Random-Set LLMs (RSLLMs), which replace conventional token-level probability distributions with finite random sets over the token space, explicitly modeling uncertainty via Dempster–Shafer belief functions. To ensure interpretability and computational tractability, RSLLMs introduce hierarchical token clustering to construct focal subsets—enabling explainable uncertainty representation, natural hallucination detection, and second-order uncertainty estimation. Experiments on Llama2-7b, Mistral-7b, and Phi-2 demonstrate that RSLLMs achieve higher accuracy than baseline models on CoQA and OBQA, while significantly improving confidence calibration and hallucination identification performance.

📝 Abstract
Large Language Models (LLMs) are known to produce very high-quality texts and responses to our queries. But how much can we trust this generated text? In this paper, we study the problem of uncertainty quantification in LLMs. We propose a novel Random-Set Large Language Model (RSLLM) approach which predicts finite random sets (belief functions) over the token space, rather than probability vectors as in classical LLMs. To do so efficiently, we also present a methodology based on hierarchical clustering to extract and use a budget of "focal" subsets of tokens upon which the belief prediction is defined, rather than using all possible collections of tokens, making the method scalable yet effective. RSLLMs encode the epistemic uncertainty induced in their generation process by the size and diversity of their training sets via the size of the credal sets associated with the predicted belief functions. The proposed approach is evaluated on the CoQA and OBQA datasets using the Llama2-7b, Mistral-7b, and Phi-2 models, and is shown to outperform the standard model on both datasets in terms of answer correctness, while also showing potential in estimating the second-level uncertainty in its predictions and providing the capability to detect when it is hallucinating.
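The abstract's core objects can be sketched concretely. Below is an illustrative, self-contained example (not the paper's implementation) of a Dempster–Shafer mass function over a tiny token vocabulary: belief and plausibility bound the probability of a token, and the width of that credal interval serves as an epistemic-uncertainty signal. All tokens, subsets, and mass values are hypothetical.

```python
# Illustrative sketch of Dempster-Shafer belief/plausibility over tokens.
# Not the paper's code; all names and numbers are hypothetical.

def belief(event, masses):
    """Bel(A): total mass of focal subsets fully contained in A."""
    return sum(m for focal, m in masses.items() if focal <= event)

def plausibility(event, masses):
    """Pl(A): total mass of focal subsets that intersect A."""
    return sum(m for focal, m in masses.items() if focal & event)

# Hypothetical mass assignment over focal subsets of a 4-token vocabulary.
masses = {
    frozenset({"cat"}): 0.5,
    frozenset({"cat", "kitten"}): 0.3,                 # cluster of similar tokens
    frozenset({"cat", "kitten", "dog", "pet"}): 0.2,   # near-vacuous mass
}

event = frozenset({"cat"})
bel, pl = belief(event, masses), plausibility(event, masses)
print(bel, pl, pl - bel)  # 0.5 1.0 0.5 -> interval width = epistemic uncertainty
```

Any probability consistent with this belief function assigns "cat" between 0.5 and 1.0; a wide gap (here 0.5) indicates high epistemic uncertainty, which is the kind of signal the paper leverages for hallucination detection.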
Problem

Research questions and friction points this paper is trying to address.

Quantifying uncertainty in Large Language Models (LLMs)
Predicting finite random sets over token space
Detecting hallucinations and estimating prediction uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predicts finite random sets over token space
Uses hierarchical clustering for focal token subsets
Encodes epistemic uncertainty via credal sets
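The "budget of focal subsets" idea from the second bullet can be sketched as follows: agglomerative (single-linkage) clustering over token embeddings, where each merge yields a candidate focal subset and at most `budget` subsets are kept. The embeddings, distance, and budget below are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: build a budget of focal subsets via single-linkage
# agglomerative clustering over toy token embeddings. Not the paper's code.

def dist(x, y):
    """Euclidean distance between two embedding tuples."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def focal_subsets(embeddings, budget):
    """Greedy single-linkage merges; return up to `budget` focal subsets."""
    clusters = [frozenset([tok]) for tok in embeddings]
    focals = list(clusters)  # singletons are always available focal sets
    while len(clusters) > 1 and len(focals) < budget:
        # find the closest pair of clusters (min distance between members)
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ij: min(
                dist(embeddings[u], embeddings[v])
                for u in clusters[ij[0]] for v in clusters[ij[1]]
            ),
        )
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
        focals.append(merged)
    return focals

# Toy 2-D "embeddings" for four tokens (hypothetical values).
emb = {"cat": (0.0, 0.0), "kitten": (0.1, 0.0), "dog": (1.0, 1.0), "pet": (0.9, 1.1)}
subsets = focal_subsets(emb, budget=6)
print(subsets)  # 4 singletons plus {cat, kitten} and {dog, pet}
```

Restricting the belief prediction to such a small, semantically coherent family of subsets is what keeps the random-set head tractable compared with the full power set of the vocabulary.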