Contrastive Reasoning Alignment: Reinforcement Learning from Hidden Representations

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of relying solely on output-level defenses against jailbreak attacks in large language models by proposing an approach that aligns safe and unsafe reasoning trajectories in the hidden representation space. By integrating contrastive representation learning with GRPO-based reinforcement learning, the method optimizes hidden states throughout the reasoning process, and the authors show theoretically that superficially aligned but unsafe policies are ruled out as local optima, yielding deep safety alignment. Evaluated on Qwen3-4B-Thinking and R1-Distill-Llama-8B, the approach improves reasoning safety by 79.0% and final-response safety by 87.7% on average over the base models, substantially outperforming state-of-the-art baselines such as IPO and SafeKey.
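The contrastive hidden-state separation described above can be sketched with an InfoNCE-style objective: an anchor hidden state is pulled toward a safe reasoning trajectory's representation and pushed away from unsafe ones. This is a minimal illustration only; the loss form, the cosine similarity, the temperature, and all shapes are assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrastive_separation_loss(h_anchor, h_safe, h_unsafe, tau=0.1):
    """InfoNCE-style loss: pull the anchor hidden state toward a safe
    trajectory's representation, push it away from unsafe ones.
    Illustrative sketch, not the paper's actual objective."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(h_anchor, h_safe) / tau)
    neg = sum(np.exp(cos(h_anchor, h_u) / tau) for h_u in h_unsafe)
    return -np.log(pos / (pos + neg))

# Toy hidden states (64-dim): the "safe" trajectory is close to the anchor,
# the "unsafe" ones are unrelated random directions.
rng = np.random.default_rng(0)
h = rng.normal(size=64)
h_safe = h + 0.1 * rng.normal(size=64)
h_unsafe = [rng.normal(size=64) for _ in range(4)]

loss = contrastive_separation_loss(h, h_safe, h_unsafe)
```

Minimizing this loss over reasoning-step hidden states would shape the latent geometry so that safe and unsafe trajectories become linearly separable, which is the intuition the summary attributes to CRAFT.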

📝 Abstract
We propose CRAFT, a red-teaming alignment framework that leverages model reasoning capabilities and hidden representations to improve robustness against jailbreak attacks. Unlike prior defenses that operate primarily at the output level, CRAFT aligns large reasoning models to generate safety-aware reasoning traces by explicitly optimizing objectives defined over the hidden state space. Methodologically, CRAFT integrates contrastive representation learning with reinforcement learning to separate safe and unsafe reasoning trajectories, yielding a latent-space geometry that supports robust, reasoning-level safety alignment. Theoretically, we show that incorporating latent-textual consistency into GRPO eliminates superficially aligned policies by ruling them out as local optima. Empirically, we evaluate CRAFT on multiple safety benchmarks using two strong reasoning models, Qwen3-4B-Thinking and R1-Distill-Llama-8B, where it consistently outperforms state-of-the-art defenses such as IPO and SafeKey. Notably, CRAFT delivers an average 79.0% improvement in reasoning safety and 87.7% improvement in final-response safety over the base models, demonstrating the effectiveness of hidden-space reasoning alignment.
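The GRPO component mentioned in the abstract computes advantages relative to a group of sampled responses rather than a learned critic; the abstract's latent-textual consistency term would enter through the reward. The sketch below shows only this group-relative advantage computation with a hypothetical combined reward; the reward names and weighting are illustrative assumptions, not CRAFT's actual definitions.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: standardize each sampled response's
    reward against its group's mean and std (GRPO's core idea)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def combined_reward(text_safety, latent_consistency, lam=0.5):
    """Hypothetical reward mixing a textual safety score with a
    latent-textual consistency term (weighting is an assumption)."""
    return text_safety + lam * latent_consistency

# Four sampled responses to one prompt: two safe and latently consistent,
# two unsafe with low consistency.
rewards = [
    combined_reward(1.0, 0.8),
    combined_reward(0.0, 0.1),
    combined_reward(1.0, 0.9),
    combined_reward(0.0, 0.2),
]
adv = grpo_advantages(rewards)
```

Under this construction, a response whose text looks safe but whose hidden states are inconsistent with safe reasoning receives a lower combined reward, which is one way the consistency term could prevent superficially aligned policies from being local optima.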
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
reasoning alignment
safety
hidden representations
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Reasoning Alignment
Hidden Representations
Reinforcement Learning
Safety Alignment
Reasoning Trajectories