Evaluating Defences against Unsafe Feedback in RLHF

📅 2024-09-19
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a critical risk in Reinforcement Learning from Human Feedback (RLHF): harmful feedback, in which unsafe samples are preferred over safe ones, can collapse a model's safety alignment, since safety-aligned LLMs readily explore unsafe action spaces and optimize for reward that violates safety constraints. The paper provides the first systematic analysis of learning from unsafe feedback and adapts a range of "implicit" and "explicit" harmful fine-tuning defences, evaluating them as learning constraints in the RLHF setting; it finds that no defence is generally effective, and that existing safety mechanisms degrade under RLHF's dynamic optimization. It further observes that the defences that do help work by performing "harmless reward hacking", for which it gives a theoretical explanation drawn from Constrained Markov Decision Processes (CMDPs), and it outlines directions for future defence development toward trustworthy alignment.

📝 Abstract
While there has been progress towards aligning Large Language Models (LLMs) with human values and ensuring safe behaviour at inference time, safety guards can easily be removed when models are fine-tuned on unsafe and harmful datasets. While this setting has been treated extensively, another popular training paradigm, learning from unsafe feedback with reinforcement learning, has previously been unexplored. This is concerning given the widespread deployment of feedback-collection systems. We address this gap by providing an analysis of learning settings where feedback is harmful, i.e. where unsafe samples are preferred over safe ones despite model developers' goal of maintaining safety. We find that safety-aligned LLMs easily explore unsafe action spaces by generating harmful text and optimize for reward that violates safety constraints, indicating that current safety guards are not enough to prevent learning from unsafe feedback. To protect against this vulnerability, we adapt a number of both "implicit" and "explicit" harmful fine-tuning defences and evaluate whether they are effective as learning constraints in an RLHF setting, finding that no method is generally effective, which points to the need for more defence research. We end the paper with the observation that some defences work by performing "harmless reward hacking", for which we provide a theoretical explanation drawn from the theory of Constrained Markov Decision Processes, and we provide some direction for future defence development.
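For context, the CMDP framing the abstract refers to can be sketched as the standard safety-constrained objective (the symbols below are generic notation, not taken from the paper):

```latex
\max_{\pi}\; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t} \gamma^{t}\, c(s_t, a_t)\right] \le d
```

Here $r$ is the (feedback-derived) reward, $c$ is a safety cost, and $d$ is a safety budget. The hazard the paper studies arises when the feedback is harmful: $r$ itself rewards unsafe text, so unconstrained maximisation of $r$ pushes the expected cost past $d$.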
Problem

Research questions and friction points this paper is trying to address.

Analyzes learning from harmful feedback in RLHF.
Evaluates defences against unsafe fine-tuning in LLMs.
Identifies need for improved safety guards in training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes harmful feedback in RLHF
Evaluates implicit and explicit defences
Explains "harmless reward hacking" via CMDP theory
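To make the "explicit" defence idea concrete, one common pattern is a Lagrangian-style relaxation of the CMDP constraint: penalise the task reward by an estimated safety cost before the RL update. The sketch below is purely illustrative; the function, constants, and numbers are hypothetical and not from the paper.

```python
# Illustrative sketch (not the paper's method): an "explicit" defence that
# shapes the reward with a safety-cost penalty, in the spirit of a
# Lagrangian relaxation of the CMDP objective  r - lam * max(0, c - d).
def constrained_reward(task_reward: float, safety_cost: float,
                       lam: float = 5.0, budget: float = 0.0) -> float:
    """Return task_reward minus a penalty for exceeding the safety budget."""
    return task_reward - lam * max(0.0, safety_cost - budget)

# Under harmful feedback, an unsafe sample may earn high task reward,
# but the penalty makes it less attractive than a safe alternative.
unsafe = constrained_reward(task_reward=1.0, safety_cost=0.8)  # 1.0 - 5.0*0.8 = -3.0
safe = constrained_reward(task_reward=0.6, safety_cost=0.0)    # 0.6
```

The paper's finding is that no such defence is generally effective as an RLHF learning constraint; this snippet only shows the shape of the mechanism being evaluated, not a working safeguard.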