🤖 AI Summary
This work investigates whether large language models (LLMs) genuinely internalize human preferences during Reinforcement Learning from Human Feedback (RLHF). To formalize this question, the authors introduce the *Learned Feedback Pattern* (LFP): patterns in a model's activations that encode the human preference signal learned during fine-tuning. Methodologically, they train interpretable probes on a condensed, sparse representation of LLM activations to estimate the feedback signal implicit in those activations, and they validate the probes with a cross-model semantic check in which GPT-4 describes and classifies the features the probes associate with LFPs. Results indicate that LFPs are consistently present in RLHF-trained models and are decodable, with probe predictions correlating with the actual fine-tuning feedback, and that GPT-4's semantic descriptions of LFP-associated features align with the neurally identified activations. The work offers an interpretable, empirically testable approach to understanding RLHF alignment, evaluating behavioral consistency, and improving LLM safety.
📝 Abstract
Reinforcement learning from human feedback (RLHF) is widely used to train large language models (LLMs). However, it is unclear whether LLMs accurately learn the underlying preferences in human feedback data. We coin the term *Learned Feedback Pattern* (LFP) for patterns in an LLM's activations learned during RLHF that improve its performance on the fine-tuning task. We hypothesize that LLMs whose LFPs are accurately aligned to the fine-tuning feedback exhibit consistent activation patterns for outputs that would have received similar feedback during RLHF. To test this, we train probes to estimate the feedback signal implicit in the activations of a fine-tuned LLM. We then compare these estimates to the true feedback, measuring how faithfully the LFPs reflect the fine-tuning feedback. Our probes are trained on a condensed, sparse, and interpretable representation of LLM activations, making it easier to correlate features of the input with our probe's predictions. We validate our probes by comparing the neural features they associate with positive feedback against the features GPT-4 describes and classifies as related to LFPs. Understanding LFPs can help minimize discrepancies between LLM behavior and training objectives, which is essential for the safety of LLMs.
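The probing setup described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes synthetic sparse feature vectors standing in for the condensed activation representation, a scalar standing in for the fine-tuning feedback signal, and a simple linear (ridge) probe. The variable names and dimensions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the condensed, sparse representation of LLM
# activations: 500 outputs, 64 features, roughly 90% of entries zero.
n_samples, n_features = 500, 64
mask = rng.random((n_samples, n_features)) < 0.1
features = rng.random((n_samples, n_features)) * mask

# Assume (for illustration) the feedback depends on a handful of
# interpretable features, plus noise.
true_weights = np.zeros(n_features)
true_weights[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]
feedback = features @ true_weights + 0.05 * rng.standard_normal(n_samples)

# Linear probe: closed-form ridge regression estimating the implicit
# feedback signal from the sparse features.
lam = 1e-2
w = np.linalg.solve(
    features.T @ features + lam * np.eye(n_features),
    features.T @ feedback,
)

# Compare probe estimates to the true feedback; a high correlation would
# suggest the feedback pattern is decodable from the representation.
pred = features @ w
corr = np.corrcoef(pred, feedback)[0, 1]

# Features with large |w| are candidate LFP-associated features, which can
# then be inspected or described by an external model such as GPT-4.
top_features = np.argsort(-np.abs(w))[:5]
```

Because the probe is linear and the representation is sparse, each large-magnitude probe weight points at a single feature, which is what makes correlating input features with the probe's predictions tractable.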