Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training

📅 2024-07-12
🏛️ arXiv.org
📈 Citations: 23
Influential: 4
🤖 AI Summary
Large language models (LLMs) exhibit a “refusal position bias” introduced by safety fine-tuning data: they learn to refuse harmful queries predominantly at the start of a response, leaving later response positions insufficiently protected. To address this, we propose Decoupled Refusal Training (DeRTa), a framework that teaches models to refuse at any position in a response. DeRTa introduces two components: (1) Maximum Likelihood Estimation (MLE) with a harmful response prefix, which conditions a safe refusal on a partial harmful response for fine-grained refusal modeling, and (2) Reinforced Transition Optimization (RTO), which trains the model to transition from a potentially harmful continuation to a safety refusal at any token position. Experiments on the LLaMA3 and Mistral model families across six attack scenarios show that DeRTa delivers notably stronger safety than strong baselines while preserving general-purpose capabilities.

📝 Abstract
This study addresses a critical gap in safety tuning practices for Large Language Models (LLMs) by identifying and tackling a refusal position bias within safety tuning data, which compromises the models' ability to appropriately refuse generating unsafe content. We introduce a novel approach, Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse compliance to harmful prompts at any response position, significantly enhancing their safety capabilities. DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation (MLE) with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by appending a segment of harmful response to the beginning of a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to safety refusal consistently throughout the harmful response sequence. Our empirical evaluation, conducted using LLaMA3 and Mistral model families across six attack scenarios, demonstrates that our method not only improves model safety without compromising performance but also surpasses baseline methods in defending against attacks.
Problem

Research questions and friction points this paper is trying to address.

Addresses refusal position bias in LLM safety tuning
Enhances LLM ability to refuse harmful prompts flexibly
Improves safety without compromising model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled Refusal Training (DeRTa) enhances LLM safety
MLE with Harmful Response Prefix teaches models to recognize and avoid continuing unsafe content
Reinforced Transition Optimization enables consistent refusal at any response position
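The two training signals above can be sketched as a target-construction step. This is an illustrative reconstruction from the summary, not the authors' code: the token lists, the `<SORRY>` refusal token, and the `-1` ignore index are all assumptions for the sketch.

```python
import random

IGNORE = -1  # assumed label value masked out of the loss (cf. ignore_index)

def build_derta_targets(query, harmful, safe, refusal_tok="<SORRY>"):
    """Build (input, target) pairs for DeRTa's two components (sketch).

    MLE with harmful response prefix: prepend a random-length slice of the
    harmful response to the safe response; the loss supervises only the
    safe-response tokens, so the model learns to produce a safe refusal
    even after a partially harmful continuation.

    RTO: at every position of the harmful response, the target is the
    refusal token, teaching the model it can switch to refusal anywhere.
    """
    k = random.randint(0, len(harmful))  # harmful prefix length
    mle_input = query + harmful[:k] + safe
    # mask query + harmful prefix; supervise only the safe response
    mle_target = [IGNORE] * (len(query) + k) + safe

    rto_input = query + harmful
    # refusal token supervised at every harmful-response position
    rto_target = [IGNORE] * len(query) + [refusal_tok] * len(harmful)
    return (mle_input, mle_target), (rto_input, rto_target)
```

In an actual fine-tuning loop, both target sequences would feed a standard token-level cross-entropy loss with the masked positions ignored.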