Calibration Across Layers: Understanding Calibration Evolution in LLMs

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the conventional assumption that calibration in large language models (LLMs) occurs solely at the output layer, investigating instead how calibration capabilities evolve dynamically across layers during forward propagation. Method: We propose that calibration is a distributed, cross-layer phenomenon and identify interpretable, low-dimensional calibration directions within the residual stream. Leveraging the MMLU benchmark, we conduct targeted interventions on the residual stream across multiple open-source LLMs using entropy-based neuron analysis and null-space projection of unembedding matrices. Contribution/Results: Experiments reveal a distinct confidence-refinement stage in upper layers; targeted interventions significantly improve calibration metrics—including Expected Calibration Error (ECE) and Maximum Calibration Error (MCE)—without degrading accuracy. To our knowledge, this is the first systematic study to uncover the layer-wise dynamics of internal calibration in LLMs and to provide a controllable, interpretable pathway for cross-layer calibration intervention.
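The summary mentions null-space projection of the unembedding matrix: residual-stream directions with the smallest singular values of W_U barely move the logits, so writing into them changes internal state without (directly) changing the output distribution. A minimal NumPy sketch of that idea, with toy sizes and a hand-picked cutoff `k` that are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy stand-ins for real model quantities (actual LLMs are far larger).
rng = np.random.default_rng(0)
d_model, vocab = 16, 50
W_U = rng.normal(size=(d_model, vocab)) / np.sqrt(d_model)  # residual -> logits

# Left singular vectors of W_U with the smallest singular values span the
# residual-stream directions with the weakest direct effect on the logits
# (the "effective null space" when W_U is full rank).
U, S, Vt = np.linalg.svd(W_U, full_matrices=True)
k = 4                            # number of weak directions to keep (a choice)
null_basis = U[:, -k:]           # d_model x k orthonormal basis

h = rng.normal(size=d_model)     # a residual-stream state
h_null = null_basis @ (null_basis.T @ h)  # its near-logit-invisible component
```

Because `null_basis` is orthonormal, the projection is idempotent, and a vector along the weakest singular direction is attenuated by exactly the smallest singular value when mapped through W_U.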

📝 Abstract
Large Language Models (LLMs) have demonstrated inherent calibration capabilities, where predicted probabilities align well with correctness, despite prior findings that deep neural networks are often overconfident. Recent studies have linked this behavior to specific components in the final layer, such as entropy neurons and the unembedding matrix null space. In this work, we provide a complementary perspective by investigating how calibration evolves throughout the network depth. Analyzing multiple open-weight models on the MMLU benchmark, we uncover a distinct confidence correction phase in the upper layers, where model confidence is actively recalibrated after decision certainty has been reached. Furthermore, we identify a low-dimensional calibration direction in the residual stream whose perturbation significantly improves calibration metrics (ECE and MCE) without harming accuracy. Our findings suggest that calibration is a distributed phenomenon, shaped throughout the network's forward pass rather than only in its final projection, providing new insights into how confidence-regulating mechanisms operate within LLMs.
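The abstract's headline metrics, ECE and MCE, compare per-bin average confidence against per-bin empirical accuracy. A self-contained sketch of the standard equal-width binning computation (the binning scheme here is the common default, not necessarily the paper's exact protocol):

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=10):
    """Expected and Maximum Calibration Error via equal-width confidence bins.

    ECE is the bin-size-weighted mean of |avg confidence - accuracy| per bin;
    MCE is the worst single-bin gap.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue  # empty bins contribute nothing
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / n) * gap
        mce = max(mce, gap)
    return ece, mce

# Toy example: 4 predictions at 0.95 confidence (3 correct), 2 at 0.55 (1 correct)
conf = np.array([0.95, 0.95, 0.95, 0.95, 0.55, 0.55])
corr = np.array([1, 1, 1, 0, 1, 0])
ece, mce = ece_mce(conf, corr, n_bins=10)
```

In the toy example the 0.9-1.0 bin has gap |0.95 - 0.75| = 0.2 with weight 4/6, and the 0.5-0.6 bin has gap 0.05 with weight 2/6, giving ECE = 0.15 and MCE = 0.2.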
Problem

Research questions and friction points this paper is trying to address.

Investigating how calibration evolves across different layers in LLMs
Identifying confidence correction mechanisms in upper network layers
Discovering low-dimensional calibration directions that improve metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing calibration evolution across network layers
Identifying confidence correction phase in later layers
Discovering low-dimensional calibration direction in residual stream
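The intervention the bullets describe — perturbing the residual stream along a low-dimensional calibration direction — can be sketched as adding a scaled direction vector to the hidden state just before the unembedding. Everything here (the random direction `d`, the scale `alpha`, the toy sizes) is a hypothetical stand-in, not the direction or procedure identified in the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy stand-ins: random unembedding, hidden state, and a unit "calibration
# direction" d in the residual stream.
rng = np.random.default_rng(1)
d_model, vocab = 16, 10
W_U = rng.normal(size=(d_model, vocab))
h = rng.normal(size=d_model)
d = rng.normal(size=d_model)
d /= np.linalg.norm(d)

def top_confidence(alpha):
    # Intervene by adding alpha * d before unembedding; alpha controls how
    # strongly the top-token confidence is pushed up or down.
    return softmax((h + alpha * d) @ W_U).max()

confs = {alpha: top_confidence(alpha) for alpha in (-2.0, 0.0, 2.0)}
```

Sweeping `alpha` and re-measuring ECE/MCE on held-out data is the natural way to check whether such a direction recalibrates confidence without changing the argmax prediction.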
Abhinav Joshi
Department of Computer Science and Engineering, Indian Institute of Technology Kanpur (IIT Kanpur)
Areeb Ahmad
Department of Computer Science and Engineering, Indian Institute of Technology Kanpur (IIT Kanpur)
Ashutosh Modi
Indian Institute of Technology Kanpur
Natural Language Processing · Machine and Deep Learning · Artificial Intelligence · Affective Computing · Legal AI