🤖 AI Summary
Human feedback often encodes implicit, unarticulated preferences, which can alter language model behavior in ways practitioners cannot predict or interpret. To address this, we propose WIMHF (What's In My Human Feedback?), a method that automatically discovers interpretable, fine-grained preference features directly from feedback data using sparse autoencoders, without pre-specifying hypotheses. The approach comprises three stages: preference signal decomposition, feature importance analysis, and feature-guided relabeling, which together disentangle and reconstruct otherwise opaque preference predictions. Across seven benchmark datasets, WIMHF identifies a small set of human-interpretable features that account for the majority of the preference prediction signal achieved by black-box models. Relabeling harmful examples guided by these features yields large safety gains (+37%) at no cost to general performance, and annotator-specific feature weights improve personalized preference modeling on the Community Alignment dataset. WIMHF thus offers an interpretable, scalable framework for understanding and controllably editing human feedback signals in language model alignment.
📝 Abstract
Human feedback can alter language models in unpredictable and undesirable ways, as practitioners lack a clear understanding of what feedback data encodes. While prior work studies preferences over certain attributes (e.g., length or sycophancy), automatically extracting relevant features without pre-specifying hypotheses remains challenging. We introduce What's In My Human Feedback? (WIMHF), a method to explain feedback data using sparse autoencoders. WIMHF characterizes both (1) the preferences a dataset is capable of measuring and (2) the preferences that the annotators actually express. Across 7 datasets, WIMHF identifies a small number of human-interpretable features that account for the majority of the preference prediction signal achieved by black-box models. These features reveal a wide diversity in what humans prefer, and the role of dataset-level context: for example, users on Reddit prefer informality and jokes, while annotators in HH-RLHF and PRISM disprefer them. WIMHF also surfaces potentially unsafe preferences, such as that LMArena users tend to vote against refusals, often in favor of toxic content. The learned features enable effective data curation: re-labeling the harmful examples in Arena yields large safety gains (+37%) with no cost to general performance. They also allow fine-grained personalization: on the Community Alignment dataset, we learn annotator-specific weights over subjective features that improve preference prediction. WIMHF provides a human-centered analysis method for practitioners to better understand and use preference data.
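The first two stages described above (decomposing preference signal with a sparse autoencoder, then measuring which learned features carry the preference prediction signal) can be illustrated with a toy numpy sketch. Everything here is an illustrative assumption, not the paper's actual architecture: the synthetic "embedding difference" data, the dimensions, the ReLU-plus-L1 autoencoder, and the logistic-regression importance scores are stand-ins chosen to make the idea concrete and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic stand-in for response-pair embeddings (hypothetical data). ---
# Each row plays the role of emb(chosen) - emb(rejected) for one comparison;
# y is the annotator's preference label, driven by two latent features.
d, k, n = 32, 8, 2000                           # embedding dim, dictionary size, pairs
ground_truth = rng.normal(size=(k, d))          # latent "preference features"
codes = np.maximum(rng.normal(size=(n, k)), 0)  # sparse nonnegative activations
X = codes @ ground_truth + 0.01 * rng.normal(size=(n, d))
y = (codes[:, 0] - codes[:, 1] > 0).astype(float)

# --- Stage 1 sketch: sparse autoencoder on embedding differences. ---
We = rng.normal(scale=0.1, size=(d, k))  # encoder weights
Wd = rng.normal(scale=0.1, size=(k, d))  # decoder weights
lr, l1 = 0.05, 0.01
for _ in range(2000):
    Z = np.maximum(X @ We, 0)            # ReLU codes
    err = Z @ Wd - X                     # reconstruction error
    dZ = (err @ Wd.T + l1 * np.sign(Z)) * (Z > 0)  # grad with L1 sparsity
    Wd -= lr * Z.T @ err / n
    We -= lr * X.T @ dZ / n

Z = np.maximum(X @ We, 0)                # final sparse feature activations

# --- Stage 2 sketch: feature importance via logistic regression on codes. ---
w, b = np.zeros(k), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(Z @ w + b)))   # predicted preference probability
    g = p - y                            # logistic-loss gradient
    w -= 0.1 * Z.T @ g / n
    b -= 0.1 * g.mean()

importance = np.abs(w)                   # which features drive the labels
print("features ranked by importance:", np.argsort(importance)[::-1])
```

A third stage, feature-guided relabeling, would then flip or drop labels on examples where an undesired feature (e.g. one tracking toxicity) dominates the prediction; that step is omitted here since it depends on human inspection of the discovered features.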