Interpreting Learned Feedback Patterns in Large Language Models

📅 2023-10-12
🏛️ Neural Information Processing Systems
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether large language models (LLMs) genuinely internalize human preferences during Reinforcement Learning from Human Feedback (RLHF). It introduces the *Learned Feedback Pattern* (LFP): a latent representation in a model's activations that encodes the human preference signal. Methodologically, the authors combine activation probing, sparse low-dimensional compression of activations, and cross-validation against semantic feature descriptions to define and quantify LFPs empirically; they further build a GPT-4-based semantic alignment framework to assess how decodable and generalizable LFPs are. Results show that LFPs are consistently present in RLHF-trained models and highly decodable (probe accuracy significantly exceeds baselines), that probe outputs correlate strongly with the actual human feedback, and that GPT-4's descriptions of LFP-associated features align closely with the neurally identified activations. The work offers an interpretable, empirically verifiable paradigm for understanding RLHF alignment mechanisms, evaluating behavioral consistency, and improving LLM safety.
📝 Abstract
Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term *Learned Feedback Pattern* (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how accurate the LFPs are to the fine-tuning feedback. Our probes are trained on a condensed, sparse and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they correlate with positive feedback inputs against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the safety of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Analyzing learned feedback patterns in RLHF-trained LLMs
Measuring alignment between model activations and human feedback
Improving LLM safety by minimizing objective discrepancies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probes estimate feedback from activations
Sparse interpretable representation of activations
Compare neural features with GPT-4 classifications
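The probe idea in the bullets above can be sketched as a linear probe trained on a sparse, condensed feature representation of activations. The sketch below is illustrative only: the synthetic sparse features, the single informative dimension, and the plain logistic-regression probe are all assumptions standing in for the paper's real LLM activations and learned sparse compression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's condensed, sparse activation
# representation: 200 samples, 32 mostly-zero features.
n_samples, n_features = 200, 32
features = rng.random((n_samples, n_features)) * (rng.random((n_samples, n_features)) < 0.2)

# Synthetic "human feedback" signal driven by a single sparse feature
# (feature 3), so the probe has something interpretable to recover.
true_weights = np.zeros(n_features)
true_weights[3] = 4.0
feedback = (features @ true_weights + rng.normal(0, 0.1, n_samples) > 0.2).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear probe (logistic regression) trained with plain gradient descent
# to estimate the feedback signal from the sparse features.
w = np.zeros(n_features)
b = 0.0
lr = 0.5
for _ in range(500):
    pred = sigmoid(features @ w + b)
    grad_w = features.T @ (pred - feedback) / n_samples
    grad_b = (pred - feedback).mean()
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = ((sigmoid(features @ w + b) > 0.5) == feedback).mean()

# Because the representation is sparse and low-dimensional, the probe's
# largest weight points directly at the feature it associates with
# positive feedback, which is what makes it interpretable.
top_feature = int(np.argmax(np.abs(w)))
```

Inspecting `top_feature` mirrors the paper's workflow of correlating probe predictions with identifiable input features, which can then be checked against an LLM's natural-language descriptions of those features.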
🔎 Similar Papers
No similar papers found.
Luke Marks
Apart Research
Amir Abdullah
Apart Research
Clement Neo
Apart Research, Nanyang Technological University
Rauno Arike
Apart Research
David Krueger
University of Cambridge
Philip H. S. Torr
Department of Engineering Sciences, University of Oxford
Fazl Barez
University of Oxford
AI Safety · Explainability · Interpretability · AI Governance and Policy