Can We Predict Alignment Before Models Finish Thinking? Towards Monitoring Misaligned Reasoning Models

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of detecting unsafe outputs in real time during chain-of-thought (CoT) generation in open-weights reasoning language models. We propose an early alignment-prediction method based on hidden-layer activation states: a lightweight linear probe identifies misalignment signals directly from intermediate-layer activations, without relying on the generated text, enabling detection of harmful responses before generation completes. Evaluated across multiple model scales (7B–70B) and safety benchmarks (e.g., SafeBench, ToxiGen), the approach significantly outperforms output-text-based detection baselines. Reliability is validated through both human evaluation and large-model judging. The method enables low-overhead, real-time safety monitoring and intervention during inference, offering a scalable, activation-driven paradigm for inference-time model alignment.
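The core mechanism described above, a linear probe classifying pooled CoT activations as safe or unsafe, can be sketched as follows. This is not the authors' code: the activations are synthetic stand-ins for hidden states that would, in practice, be extracted from an open-weights model's intermediate layers, and all names and dimensions are illustrative.

```python
# Minimal sketch (assumed, not the paper's implementation): train a
# linear probe to predict whether the final response will be unsafe
# from mean-pooled CoT activations. Synthetic data keeps it self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 64                     # toy hidden size; real models use 4096+

def synth_activations(n, unsafe_shift=0.5):
    # Each row stands in for the mean-pooled activation of one CoT.
    X = rng.normal(size=(n, d_model))
    y = rng.integers(0, 2, size=n)      # 1 = final response unsafe
    X[y == 1] += unsafe_shift           # unsafe CoTs carry a linear signal
    return X, y

X_tr, y_tr = synth_activations(400)
X_te, y_te = synth_activations(100)

# A logistic-regression probe is one standard choice of linear probe.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"probe accuracy on held-out synthetic CoTs: {acc:.2f}")
```

Because the probe is a single linear layer over activations the model already computes, scoring each generation step adds negligible overhead compared to running a separate text classifier or LLM judge over the CoT.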

📝 Abstract
Open-weights reasoning language models generate long chains-of-thought (CoTs) before producing a final response, which improves performance but introduces additional alignment risks, with harmful content often appearing in both the CoTs and the final outputs. In this work, we investigate if we can use CoTs to predict final response misalignment. We evaluate a range of monitoring approaches, including humans, highly-capable large language models, and text classifiers, using either CoT text or activations. First, we find that a simple linear probe trained on CoT activations can significantly outperform all text-based methods in predicting whether a final response will be safe or unsafe. CoT texts are often unfaithful and can mislead humans and classifiers, while model latents (i.e., CoT activations) offer a more reliable predictive signal. Second, the probe makes accurate predictions before reasoning completes, achieving strong performance even when applied to early CoT segments. These findings generalize across model sizes, families, and safety benchmarks, suggesting that lightweight probes could enable real-time safety monitoring and early intervention during generation.
Problem

Research questions and friction points this paper is trying to address.

Predict final response misalignment using chains-of-thought (CoTs)
Evaluate monitoring approaches for safe or unsafe model outputs
Enable real-time safety monitoring with lightweight probes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probe predicts misalignment from activations
Early CoT segments enable real-time safety monitoring
Model latents outperform text-based alignment prediction
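The early-intervention idea in the list above can be illustrated with a small monitoring loop: score the running mean of per-token activations with a fitted probe and flag as soon as the unsafe probability crosses a threshold. Everything here (the synthetic activations, the threshold value, the pooling scheme) is a hedged assumption for illustration, not the paper's protocol.

```python
# Hedged sketch: flag a generation early by scoring growing CoT prefixes
# with a linear probe. Activations are synthetic; names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
d = 32

# Fit a toy probe on mean-pooled activations (label 1 = unsafe).
X = rng.normal(size=(200, d))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.6
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Simulate one unsafe trajectory: per-token activations sit near the
# unsafe direction. Score the running mean after every generated token.
tokens = rng.normal(size=(50, d)) + 0.6
threshold = 0.9
flagged_at = None
for t in range(1, len(tokens) + 1):
    prefix_mean = tokens[:t].mean(axis=0, keepdims=True)
    p_unsafe = probe.predict_proba(prefix_mean)[0, 1]
    if p_unsafe > threshold:
        flagged_at = t          # intervene before the CoT finishes
        break
print("flagged at token:", flagged_at)
```

Flagging on a prefix rather than the completed CoT is what enables intervention (e.g., aborting or steering generation) before any harmful final response is produced.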