CLUE: Non-parametric Verification from Experience via Hidden-State Clustering

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM output verification methods rely on text-level signals (e.g., reward models), which can overfit to superficial cues, or on calibrated token probabilities, which are sensitive to calibration quality. This paper proposes CLUE, a non-parametric verifier that clusters hidden-state trajectories. Its core contribution is to exploit the geometric structure of internal model activations directly: it summarizes each reasoning path as a differential (delta) representation, builds prototypical success/failure clusters in latent space from past experience, and discriminates correct from incorrect solutions by nearest-centroid distance. Experiments show that CLUE outperforms LLM-as-a-judge baselines on AIME 2024/2025 and GPQA and matches state-of-the-art confidence-based methods. On AIME 2024, it lifts the accuracy of a 1.5B-parameter model from 56.7% to 70.0%, empirically confirming that correct reasoning exhibits geometrically separable structure in the hidden space.

📝 Abstract
Assessing the quality of Large Language Model (LLM) outputs presents a critical challenge. Previous methods either rely on text-level information (e.g., reward models, majority voting), which can overfit to superficial cues, or on calibrated confidence from token probabilities, which fails on poorly calibrated models. Yet both of these signals are, in fact, partial projections of a richer source of information: the model's internal hidden states. Early layers, closer to token embeddings, preserve semantic and lexical features that underpin text-based judgments, while later layers increasingly align with output logits, embedding confidence-related information. This paper explores hidden states directly as a unified foundation for verification. We show that the correctness of a solution is encoded as a geometrically separable signature within the trajectory of hidden activations. To validate this, we present CLUE (Clustering and Experience-based Verification), a deliberately minimalist, non-parametric verifier. With no trainable parameters, CLUE simply summarizes each reasoning trace by a hidden-state delta and classifies correctness via nearest-centroid distance to "success" and "failure" clusters formed from past experience. The simplicity of this method highlights the strength of the underlying signal. Empirically, CLUE consistently outperforms LLM-as-a-judge baselines and matches or exceeds modern confidence-based methods in reranking candidates, improving both top-1 and majority-vote accuracy across AIME 24/25 and GPQA. As a highlight, on AIME 24 with a 1.5B model, CLUE boosts accuracy from 56.7% (majority@64) to 70.0% (top-maj@16).
Problem

Research questions and friction points this paper is trying to address.

Verifying LLM output quality using internal hidden states
Overcoming limitations of text-level and confidence-based verification
Developing non-parametric verification via hidden state clustering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hidden states as unified verification foundation
Clusters hidden state deltas from past experience
Classifies correctness via nearest centroid distance
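The pipeline in the bullets above can be sketched as a nearest-centroid classifier over hidden-state deltas. This is a simplified illustration, not the paper's implementation: the toy 2-D vectors stand in for high-dimensional hidden-state deltas, and the function names are hypothetical.

```python
import math

def centroid(vectors):
    # Mean of a set of hidden-state delta vectors (a cluster prototype).
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def distance(a, b):
    # Euclidean distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(delta, success_centroid, failure_centroid):
    # A trace is judged correct if its hidden-state delta lies closer
    # to the "success" prototype than to the "failure" prototype.
    return distance(delta, success_centroid) < distance(delta, failure_centroid)

# Toy 2-D stand-ins for deltas from past successful / failed traces.
c_ok = centroid([[1.0, 1.0], [1.2, 0.8]])
c_bad = centroid([[-1.0, -1.0], [-0.8, -1.2]])
print(verify([0.9, 1.1], c_ok, c_bad))  # → True: near the success cluster
```

Note there are no trainable parameters here: the "experience" is just the two centroids, which is what makes the method non-parametric and cheap to update as new labeled traces arrive.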