🤖 AI Summary
This work identifies a fundamental limitation of in-context learning (ICL) classification based on the output probabilities of manually selected label tokens: such token-based criteria yield suboptimal decision boundaries. To address this, we propose Hidden Calibration, which abandons token probabilities and instead operates on the last-layer hidden states of the language model: per-class centroids are estimated in the latent space from a calibration set, and each test sample is assigned the label of its nearest centroid. Evaluated on 6 models and 10 classification datasets, Hidden Calibration consistently outperforms token-based ICL baselines by roughly 20%~50%, establishing a strong state of the art. Further analysis shows that the resulting classification criteria exhibit less inter-class overlap and that ICL demonstrations induce linearly separable intra-class clusters in the latent space, offering new insight into how ICL works.
📝 Abstract
In-Context Learning (ICL) typically derives classification criteria from the output probabilities of manually selected label tokens. However, we argue that such token-based classification criteria lead to suboptimal decision boundaries, even after delicate calibrations are applied through translation and constrained rotation. To address this problem, we propose Hidden Calibration, which renounces token probabilities and uses a nearest centroid classifier on the LM's last hidden states. In detail, we assign to the test sample the label of the nearest centroid, where the centroids are estimated beforehand from a calibration set. Our experiments on 6 models and 10 classification datasets indicate that Hidden Calibration consistently outperforms current token-based baselines by about 20%~50%, achieving a strong state-of-the-art in ICL. Our further analysis demonstrates that Hidden Calibration finds better classification criteria with less inter-class overlap, and that LMs, with the help of demonstrations, produce linearly separable intra-class clusters, which supports Hidden Calibration and gives new insights into the principle of ICL. Our official code implementation can be found at https://github.com/hc495/Hidden_Calibration.
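The core procedure described in the abstract (estimate per-class centroids from calibration-set hidden states, then label each test sample by its nearest centroid) can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's official implementation: the function names are hypothetical, hidden states are plain Python vectors here, and in practice they would be the LM's last-layer hidden state for each prompt (e.g., at the final token position).

```python
import math
from collections import defaultdict

def estimate_centroids(hidden_states, labels):
    """Average the calibration-set hidden states of each class into a centroid."""
    sums = {}
    counts = defaultdict(int)
    for h, y in zip(hidden_states, labels):
        if y not in sums:
            sums[y] = list(h)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], h)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(hidden_state, centroids):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(hidden_state, c)))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy usage with 2-D stand-ins for hidden states:
calib_states = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
calib_labels = ["neg", "neg", "pos", "pos"]
centroids = estimate_centroids(calib_states, calib_labels)
print(predict([0.2, 0.3], centroids))  # nearest to the "neg" centroid
```

Note that no label-token probabilities appear anywhere: the decision rule lives entirely in the latent space, which is the point of the method.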