🤖 AI Summary
Personalized preference learning currently lacks a standardized, multidimensional evaluation framework, and existing evaluations largely overlook fairness, safety, and adaptability under heterogeneous user values. This paper introduces the first comprehensive multidimensional evaluation framework for personalized preference learning, covering performance, fairness, safety, and adaptability, and systematically benchmarks eight state-of-the-art methods across three real-world preference datasets. Results reveal performance disparities of up to 36% between methods and up to 20% degradation in safety alignment in high-preference-divergence scenarios, empirically exposing critical limitations of existing approaches when user preferences are highly polarized. The study establishes the first reproducible empirical benchmark for this setting, outlines actionable improvement pathways, and advances the development of more inclusive, robust, and value-aligned personalized systems.
📝 Abstract
While Reinforcement Learning from Human Feedback (RLHF) is widely used to align Large Language Models (LLMs) with human preferences, it typically assumes homogeneous preferences across users, overlooking diverse human values and minority viewpoints. Although personalized preference learning addresses this by tailoring separate preferences for individual users, the field lacks standardized methods to assess its effectiveness. We present a multi-faceted evaluation framework that measures not only performance but also fairness, unintended effects, and adaptability across varying levels of preference divergence. Through extensive experiments comparing eight personalization methods across three preference datasets, we demonstrate that performance differences between methods can reach 36% when users strongly disagree, and personalization can introduce up to 20% safety misalignment. These findings highlight the critical need for holistic evaluation approaches to advance the development of more effective and inclusive preference learning systems.
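To make the multi-faceted evaluation idea concrete, here is a minimal sketch of how per-group results might be aggregated into joint performance, fairness, and safety scores. The `EvalResult` record, the disparity definition (best-group minus worst-group accuracy), and the binary safety check are illustrative assumptions for exposition, not the paper's actual metrics or implementation.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical record: one evaluation example for one user group.
@dataclass
class EvalResult:
    group: str     # user subpopulation (e.g., a preference cluster)
    correct: bool  # did the personalized model match the user's preference?
    safe: bool     # did the response pass an independent safety check?

def multidimensional_report(results: list[EvalResult]) -> dict:
    """Aggregate overall accuracy, per-group accuracy, a fairness
    disparity, and a safety rate into one report. All metric choices
    here are assumptions made for illustration."""
    groups = {r.group for r in results}
    per_group_acc = {
        g: mean(r.correct for r in results if r.group == g) for g in groups
    }
    return {
        "accuracy": mean(r.correct for r in results),  # overall performance
        "per_group_accuracy": per_group_acc,
        # Fairness as the gap between best- and worst-served groups.
        "disparity": max(per_group_acc.values()) - min(per_group_acc.values()),
        # Safety alignment as the fraction of responses passing the check.
        "safety_rate": mean(r.safe for r in results),
    }

if __name__ == "__main__":
    demo = [
        EvalResult("majority", True, True),
        EvalResult("majority", True, True),
        EvalResult("minority", False, True),
        EvalResult("minority", True, False),
    ]
    print(multidimensional_report(demo))
```

Reporting the per-group gap alongside the overall average is what lets an evaluation surface disparities like the 36% figure above, which a single aggregate score would hide.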