🤖 AI Summary
This work investigates how Latent Adversarial Training (LAT) reshapes the representation of refusal to harmful instructions in the latent space of Llama 2-7B, and what this implies for model safety. We employ activation-difference analysis, singular value decomposition (SVD), and comparative experiments between LAT and embedding-space adversarial training. Our key finding, first reported here, is that LAT compresses refusal behavior predominantly into the top two SVD components, which capture roughly 75% of the variance; this yields a compact, linearly separable, and cross-model transferable refusal direction. This structured representation improves robustness against black-box cross-model attacks but exposes a novel self-attack vulnerability: the model is markedly more sensitive to refusal vectors extracted from its own activations. The study uncovers LAT-driven structural alignment of refusal in latent space, establishing a quantifiable, interpretable dimension for evaluating safety-focused fine-tuning.
📝 Abstract
Recent work has shown that language models' refusal behavior is primarily encoded in a single direction in their latent space, making it vulnerable to targeted attacks. Although Latent Adversarial Training (LAT) attempts to improve robustness by introducing noise during training, a key question remains: how does this noise-based training affect the underlying representation of refusal behavior? Understanding this encoding is crucial for evaluating LAT's effectiveness and limitations, just as the discovery of linear refusal directions revealed vulnerabilities in traditional supervised safety fine-tuning (SSFT). Through analysis of Llama 2-7B, we examine how LAT reorganizes refusal behavior in the model's latent space compared to SSFT and embedding-space adversarial training (AT). By computing activation differences between harmful and harmless instruction pairs and applying singular value decomposition (SVD), we find that LAT significantly alters the refusal representation, concentrating it in the first two SVD components, which explain approximately 75% of the variance in activation differences, substantially more than in reference models. This concentrated representation yields more effective and transferable refusal vectors for ablation attacks: LAT models show improved robustness when attacked with vectors extracted from reference models, but become more vulnerable to self-generated vectors compared to SSFT and AT. Our findings suggest that LAT's training perturbations enable a more comprehensive representation of refusal behavior, highlighting both its potential strengths and vulnerabilities for improving model safety.
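The core analysis described in the abstract (activation differences between paired harmful/harmless instructions, followed by SVD and a variance-explained measure over the top components) can be sketched as follows. This is a minimal illustration with synthetic NumPy data, not the paper's code: the array shapes, centering step, and random stand-ins for model activations are assumptions; in practice the rows would come from hooked forward passes over the model's residual stream.

```python
import numpy as np

# Synthetic stand-in: n instruction pairs, hidden size d
# (Llama 2-7B's actual hidden size is 4096; we use a small d here).
rng = np.random.default_rng(0)
n, d = 128, 256

# Hypothetical residual-stream activations at one layer/token position
# for matched harmful and harmless instructions.
acts_harmful = rng.normal(size=(n, d))
acts_harmless = rng.normal(size=(n, d))

# One difference vector per harmful/harmless pair, centered before SVD.
diffs = acts_harmful - acts_harmless
diffs -= diffs.mean(axis=0)

# SVD of the difference matrix; squared singular values give the
# variance explained by each component.
U, S, Vt = np.linalg.svd(diffs, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Fraction of variance in the top two components: the quantity the
# abstract reports as ~75% for LAT models (much lower on random data).
top2 = explained[:2].sum()

# The leading right-singular vector is a candidate refusal direction
# (unit norm), usable for ablation-style interventions.
refusal_direction = Vt[0]
```

On real activations, a concentrated refusal representation shows up as `top2` approaching the reported ~75%, and `refusal_direction` is the vector one would ablate or transfer across models in the attacks the abstract describes.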