From Evidence to Belief: A Bayesian Epistemology Approach to Language Models

📅 2025-04-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how large language models (LLMs) update beliefs under Bayesian epistemology, specifically examining confidence calibration and response consistency when exposed to evidence varying in informativeness and reliability.

Method: We construct a multi-type evidence dataset and employ a tripartite evaluation — verbalized confidence reports, token-level probability analysis, and sampled response distributions — to systematically assess LLMs' adherence to core Bayesian tenets: confirmation, evidence-weight sensitivity, and suppression of unreliable evidence.

Contribution/Results: We find that LLMs approximately satisfy the confirmation principle only under ground-truth evidence; they significantly deviate from Bayesian predictions under noisy, contradictory, or irrelevant evidence, exhibiting weak correlation between confidence and accuracy. Notably, LLMs display a "golden evidence" bias — over-relying on high-precision cues — and anomalous sensitivity to evidential irrelevance. These findings uncover intrinsic cognitive biases in LLMs, providing both theoretical grounding and empirical evidence for building trustworthy AI with sound belief modeling.
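The Bayesian tenets the paper tests can be illustrated with a minimal sketch of Bayes' rule (the likelihood values below are illustrative assumptions, not figures from the paper): confirming evidence should raise the posterior above the prior, weak evidence should move it only slightly, and irrelevant evidence should leave it unchanged.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

prior = 0.5
# Reliable ("golden") evidence: far more likely under H than under ~H.
strong = posterior(prior, 0.9, 0.1)       # -> 0.9, strong confirmation
# Noisy evidence: nearly uninformative, posterior barely moves.
noisy = posterior(prior, 0.55, 0.45)      # -> 0.55
# Irrelevant evidence: equally likely either way, posterior unchanged.
irrelevant = posterior(prior, 0.5, 0.5)   # -> 0.5
```

A perfectly Bayesian model would track this ordering; the paper's finding is that LLMs do so only for ground-truth evidence and deviate under the noisy and irrelevant cases.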

📝 Abstract
This paper investigates the knowledge of language models from the perspective of Bayesian epistemology. We explore how language models adjust their confidence and responses when presented with evidence of varying informativeness and reliability. To study these properties, we create a dataset with various types of evidence and analyze language models' responses and confidence using verbalized confidence, token probability, and sampling. We observe that language models do not consistently follow Bayesian epistemology: they satisfy the Bayesian confirmation assumption well with true evidence but fail to adhere to other Bayesian assumptions when encountering other evidence types. We also demonstrate that language models can exhibit high confidence when given strong evidence, but this does not always guarantee high accuracy. Our analysis further reveals that language models are biased toward golden evidence and vary in performance with the degree of irrelevance, helping explain why they deviate from Bayesian assumptions.
Problem

Research questions and friction points this paper is trying to address.

How language models adjust confidence with varying evidence
Language models' inconsistency with Bayesian epistemology assumptions
Biases in models toward golden evidence affecting accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian epistemology approach for language models
Dataset with varied evidence types for analysis
Verbalized confidence and token probability metrics
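One of the three confidence signals, sampling-based confidence, can be sketched as the empirical frequency of the modal answer across repeated generations. This is a hedged illustration of the general technique, not the paper's exact implementation; the example responses are hypothetical.

```python
from collections import Counter

def sample_confidence(responses):
    """Return (modal answer, its empirical frequency) over sampled responses."""
    counts = Counter(responses)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(responses)

# Hypothetical samples from repeatedly querying a model at temperature > 0:
answer, conf = sample_confidence(["Paris", "Paris", "Lyon", "Paris"])
# answer == "Paris", conf == 0.75
```

Verbalized confidence instead asks the model to state a probability in text, and token-probability analysis reads the likelihood the model assigns to its answer tokens; the paper compares all three against Bayesian predictions.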
Minsu Kim
KAIST AI
Sangryul Kim
NAVER Corp.
NLP · LLM · Question Answering · Retrieval
James Thorne
KAIST AI