🤖 AI Summary
This study addresses the fundamental challenge of safety alignment in large language models (LLMs): enhancing their ability to discriminate between safe and harmful inputs. The authors formalize safety alignment as estimating the KL divergence between aligned (safe) and unaligned (harmful) input distributions, and empirically observe separability of their latent-space representations.
Method: The authors propose KLDO, a novel alignment framework grounded in divergence estimation. KLDO leverages compliance-refusal data—shown theoretically, for the first time, to be superior to preference data for safety alignment—and constructs a latent-space representation distance that serves as a statistically significant indicator (p < 0.01) of jailbreak robustness.
Contributions/Results: Ablation studies covering divergence estimation, distance-based evaluation, and comparisons with RLHF variants show that KLDO achieves significantly improved separation between safe and harmful prompts. Crucially, the proposed latent distance metric correlates strongly with jailbreak robustness in practice, offering both interpretability and a quantitative assessment of alignment efficacy.
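To make the latent distance metric concrete, here is a minimal sketch (not the paper's exact metric) of measuring separation between safe and harmful prompts in representation space: the Euclidean distance between the mean hidden states of the two prompt groups. The function name `latent_separation` and the synthetic 8-dimensional "hidden states" are illustrative assumptions.

```python
import numpy as np

def latent_separation(safe_reps: np.ndarray, harmful_reps: np.ndarray) -> float:
    """Distance between the mean hidden representations of safe and
    harmful prompts; larger values suggest stronger separation."""
    mu_safe = safe_reps.mean(axis=0)
    mu_harm = harmful_reps.mean(axis=0)
    return float(np.linalg.norm(mu_safe - mu_harm))

# Toy example with synthetic 8-d "hidden states" standing in for LLM activations
rng = np.random.default_rng(0)
safe = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
harmful = rng.normal(loc=2.0, scale=1.0, size=(100, 8))
print(latent_separation(safe, harmful))
```

In the paper's setting, `safe_reps` and `harmful_reps` would be hidden states extracted from an aligned model; the claim is that this kind of separation score tracks resilience to jailbreak attacks.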
📝 Abstract
We propose a theoretical framework demonstrating that popular Large Language Model (LLM) alignment methods, including Reinforcement Learning from Human Feedback (RLHF) and its alternatives, fundamentally function as divergence estimators between aligned (preferred or safe) and unaligned (less-preferred or harmful) distributions. This explains the separation phenomenon between safe and harmful prompts in models' hidden representations after alignment. Inspired by these theoretical results, we identify that some alignment methods are better than others in terms of separation, introduce a new method, KLDO, and further demonstrate the implications of our theory. We advocate for compliance-refusal datasets over preference datasets to enhance safety alignment, supported by both theoretical reasoning and empirical evidence. Additionally, to quantify safety separation, we leverage a distance metric in the representation space and statistically validate its efficacy as a significant indicator of LLM resilience against jailbreak attacks.
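As a rough illustration of the divergence-estimator view (a simple stand-in, not the KLDO objective itself), one can fit a Gaussian to each group of representations and evaluate the closed-form KL divergence between the two fits. Larger KL between the "aligned" and "unaligned" fits corresponds to better-separated distributions. All names below (`gaussian_kl`, the synthetic `aligned`/`unaligned` samples) are assumptions for the sketch.

```python
import numpy as np

def gaussian_kl(p_samples: np.ndarray, q_samples: np.ndarray, eps: float = 1e-6) -> float:
    """KL( N(mu_p, Sig_p) || N(mu_q, Sig_q) ) for Gaussians fit to two sample sets.
    A crude plug-in divergence estimate between representation distributions."""
    d = p_samples.shape[1]
    mu_p, mu_q = p_samples.mean(axis=0), q_samples.mean(axis=0)
    Sp = np.cov(p_samples, rowvar=False) + eps * np.eye(d)  # regularize for stability
    Sq = np.cov(q_samples, rowvar=False) + eps * np.eye(d)
    Sq_inv = np.linalg.inv(Sq)
    diff = mu_q - mu_p
    return float(0.5 * (np.trace(Sq_inv @ Sp) + diff @ Sq_inv @ diff - d
                        + np.log(np.linalg.det(Sq) / np.linalg.det(Sp))))

# Synthetic stand-ins for safe vs. harmful prompt representations
rng = np.random.default_rng(0)
aligned = rng.normal(0.0, 1.0, size=(500, 4))
unaligned = rng.normal(1.5, 1.0, size=(500, 4))
print(gaussian_kl(aligned, unaligned))
```

The Gaussian fit is only one convenient estimator; the paper's point is that alignment training itself implicitly performs this kind of divergence estimation between the two input distributions.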