Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) offer limited transparency into how they process linguistic uncertainty, such as epistemic modality markers like “possible,” “suspected,” or “to be ruled out,” which are critical for clinical reasoning and diagnostic reliability. Method: We introduce the Model Sensitivity to Uncertainty (MSU) metric, which combines a curated contrastive clinical corpus with layer-wise linear probing to quantify how hidden-layer activations respond to uncertainty cues across model depth. Contribution/Results: We show that uncertainty is progressively encoded with depth, concentrating in deeper layers, and that this structured sensitivity is both localizable and quantifiable. To our knowledge, this is the first systematic investigation of the internal representational principles underlying LLMs’ handling of clinical uncertainty. Our findings provide an interpretable foundation for improving diagnostic explainability, cognitive fidelity, and trustworthy deployment of AI in clinical settings, introducing an evaluation paradigm grounded in mechanistic interpretability.
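The page does not reproduce the MSU formula, but the idea can be sketched. A minimal sketch, assuming MSU is computed as a per-layer cosine distance between mean-pooled hidden states of a contrastive sentence pair (the model choice, pooling, and distance are assumptions, not the paper's specification):

```python
# Minimal sketch of a layer-wise "Model Sensitivity to Uncertainty" (MSU) score.
# Assumptions: hidden states are mean-pooled per layer, and sensitivity is the
# cosine distance between the hedged and unhedged variants of a statement.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "gpt2"  # placeholder; the paper's models are not specified here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def layer_states(text: str) -> list[torch.Tensor]:
    """Return one mean-pooled hidden-state vector per layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple of (1, seq_len, dim), one entry per layer (incl. embeddings)
    return [h.mean(dim=1).squeeze(0) for h in outputs.hidden_states]

def msu_per_layer(certain: str, uncertain: str) -> list[float]:
    """Cosine distance between contrastive variants, layer by layer."""
    hc, hu = layer_states(certain), layer_states(uncertain)
    return [1 - torch.cosine_similarity(a, b, dim=0).item() for a, b in zip(hc, hu)]

scores = msu_per_layer(
    "The chest X-ray is consistent with pneumonia.",
    "The chest X-ray may be consistent with pneumonia.",
)
for layer, s in enumerate(scores):
    print(f"layer {layer:2d}: MSU ≈ {s:.4f}")
```

If the paper's depth-dependence finding holds, scores computed this way would tend to grow or stabilize in later layers rather than at the embedding layer.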

📝 Abstract
Large Language Models (LLMs) are increasingly used in clinical settings, where sensitivity to linguistic uncertainty can influence diagnostic interpretation and decision-making. Yet little is known about where such epistemic cues are internally represented within these models. Distinct from uncertainty quantification, which measures output confidence, this work examines input-side representational sensitivity to linguistic uncertainty in medical text. We curate a contrastive dataset of clinical statements varying in epistemic modality (e.g., 'is consistent with' vs. 'may be consistent with') and propose Model Sensitivity to Uncertainty (MSU), a layerwise probing metric that quantifies activation-level shifts induced by uncertainty cues. Our results show that LLMs exhibit structured, depth-dependent sensitivity to clinical uncertainty, suggesting that epistemic information is progressively encoded in deeper layers. These findings reveal how linguistic uncertainty is internally represented in LLMs, offering insight into their interpretability and epistemic reliability.
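The abstract characterizes MSU as a layerwise probing metric, and the summary above mentions layer-wise linear probing. A complementary sketch of that probing side, assuming one logistic-regression probe per layer trained to separate hedged from unhedged statements (the probe type, features, and evaluation protocol are assumptions):

```python
# Sketch of layer-wise linear probing: train one logistic-regression probe per
# layer to classify statements as hedged vs. unhedged, then read off the depth
# profile of probe accuracy. Probe choice and cross-validation are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_accuracy_by_layer(features, labels):
    """features: shape (n_layers, n_examples, dim); labels: shape (n_examples,)."""
    accuracies = []
    for layer_feats in features:
        probe = LogisticRegression(max_iter=1000)
        # 5-fold cross-validated accuracy of a linear probe on this layer
        scores = cross_val_score(probe, layer_feats, labels, cv=5)
        accuracies.append(scores.mean())
    return accuracies

# Toy example with random features; in practice these would be the per-layer
# mean-pooled hidden states from the sketch above.
rng = np.random.default_rng(0)
features = rng.normal(size=(12, 200, 768))   # 12 layers, 200 statements, dim 768
labels = rng.integers(0, 2, size=200)        # 1 = hedged, 0 = unhedged
for layer, acc in enumerate(probe_accuracy_by_layer(features, labels)):
    print(f"layer {layer:2d}: probe accuracy ≈ {acc:.3f}")
```

Under the paper's claim that epistemic information is progressively encoded, probe accuracy would rise with layer depth on real hidden states; the random toy data here should hover near chance.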
Problem

Research questions and friction points this paper is trying to address.

Locating linguistic uncertainty in LLMs
Measuring sensitivity to epistemic cues in medical text
Revealing internal representation of clinical doubt
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing metric quantifies activation shifts from uncertainty cues
Contrastive dataset varies epistemic modality in clinical statements (see the example after this list)
Layerwise analysis shows depth-dependent sensitivity to uncertainty
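As a concrete illustration of the contrastive construction, a hypothetical pair in Python dict form (the field names and label values are illustrative assumptions, not the paper's actual schema):

```python
# Hypothetical example of a contrastive pair in the style the abstract
# describes: the same clinical claim with and without an epistemic hedge.
# Field names are illustrative assumptions, not the paper's actual schema.
contrastive_pair = {
    "finding": "bilateral pulmonary infiltrates",
    "certain": "The scan is consistent with bilateral pulmonary infiltrates.",
    "uncertain": "The scan may be consistent with bilateral pulmonary infiltrates.",
    "hedge_marker": "may be",         # the epistemic modality cue that differs
    "label": "epistemic_possibility"  # category of uncertainty expressed
}
```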
Srivarshinee Sridhar
Vellore Institute of Technology, Chennai
Raghav Kaushik Ravi
Vellore Institute of Technology, Chennai
Kripabandhu Ghosh
Assistant Professor, IISER Kolkata, India
Information Retrieval · Machine Learning