🤖 AI Summary
Existing value assessment benchmarks rely on human or automated annotations, which introduce annotation bias, and their test scenarios lack ecological validity relative to real human–AI interactions. This paper introduces the first value assessment benchmark to combine high ecological validity with psychometric validation: it operationalizes value alignment via users' self-referential similarity ratings of benchmark items, grounded in Schwartz's ten-value theory, enabling quantitative LLM value profiling within authentic interactive scenarios; it further proposes a novel item selection paradigm based on the correlation between users' similarity ratings and their empirically measured value scores. Evaluating 27 mainstream LLMs, the results reveal systematic value preferences: strong alignment with Benevolence, Security, and Self-Direction, but attenuation of Tradition, Power, and Achievement. Critically, this work provides the first systematic evidence of demographic-group-level value perception biases across gender and age dimensions.
📝 Abstract
The importance of benchmarks for assessing the values of language models has grown with the increasing need for more authentic, human-aligned responses. However, existing benchmarks rely on human or machine annotations that are vulnerable to value-related biases. Furthermore, the tested scenarios often diverge from real-world contexts in which models are commonly used to generate text and express values. To address these issues, we propose the Value Portrait benchmark, a reliable framework for evaluating LLMs' value orientations with two key characteristics. First, the benchmark consists of items that capture real-life user-LLM interactions, enhancing the relevance of assessment results to real-world LLM usage and thus ecological validity. Second, each item is rated by human subjects based on its similarity to their own thoughts, and correlations between these ratings and the subjects' actual value scores are derived. This psychometrically validated approach ensures that items strongly correlated with specific values serve as reliable items for assessing those values. Through evaluating 27 LLMs with our benchmark, we find that these models prioritize Benevolence, Security, and Self-Direction values while placing less emphasis on Tradition, Power, and Achievement values. Also, our analysis reveals biases in how LLMs perceive various demographic groups, deviating from real human data.
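The item-validation step described above — correlating each item's self-referential similarity ratings with subjects' measured value scores, and keeping only strongly correlated items — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, array layout, and the 0.3 retention threshold are assumptions for the example.

```python
import numpy as np

def select_validated_items(ratings, value_scores, threshold=0.3):
    """Sketch of correlation-based item validation.

    ratings:      (n_subjects, n_items) array of self-referential
                  similarity ratings for each benchmark item.
    value_scores: (n_subjects, n_values) array of subjects' measured
                  value scores (e.g., the ten Schwartz values).
    Returns a list of (item_index, value_index, pearson_r) for pairs
    whose Pearson correlation exceeds the (assumed) threshold.
    """
    n_items = ratings.shape[1]
    n_values = value_scores.shape[1]
    selected = []
    for i in range(n_items):
        for v in range(n_values):
            # Pearson correlation between one item's ratings and one value dimension
            r = np.corrcoef(ratings[:, i], value_scores[:, v])[0, 1]
            if r > threshold:
                selected.append((i, v, r))
    return selected
```

Under this scheme, an item retained for a given value dimension is then treated as a reliable probe of that value when administered to an LLM.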