Value Portrait: Understanding Values of LLMs with Human-aligned Benchmark

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing value-assessment benchmarks rely on human or automated annotations, which introduce annotation bias, and their test scenarios lack ecological validity with respect to real human–AI interaction. This paper introduces a value-assessment benchmark that combines ecological validity with psychometric validation. Items are drawn from authentic user–LLM interactions, and value alignment is operationalized through participants' self-referential similarity ratings grounded in Schwartz's theory of basic values; items are then selected based on the correlation between these ratings and participants' measured value scores. Evaluating 27 mainstream LLMs with the benchmark reveals systematic value preferences: strong alignment with Benevolence, Security, and Self-Direction, but attenuated Tradition, Power, and Achievement. The work also presents evidence of value-perception biases at the level of demographic groups, including gender and age.

📝 Abstract
The importance of benchmarks for assessing the values of language models has been pronounced due to the growing need for more authentic, human-aligned responses. However, existing benchmarks rely on human or machine annotations that are vulnerable to value-related biases. Furthermore, the tested scenarios often diverge from real-world contexts in which models are commonly used to generate text and express values. To address these issues, we propose the Value Portrait benchmark, a reliable framework for evaluating LLMs' value orientations with two key characteristics. First, the benchmark consists of items that capture real-life user-LLM interactions, enhancing the relevance of assessment results to real-world LLM usage and thus ecological validity. Second, each item is rated by human subjects based on its similarity to their own thoughts, and correlations between these ratings and the subjects' actual value scores are derived. This psychometrically validated approach ensures that items strongly correlated with specific values serve as reliable items for assessing those values. Through evaluating 27 LLMs with our benchmark, we find that these models prioritize Benevolence, Security, and Self-Direction values while placing less emphasis on Tradition, Power, and Achievement values. Also, our analysis reveals biases in how LLMs perceive various demographic groups, deviating from real human data.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM values with human-aligned benchmarks
Reducing value-related biases in benchmark annotations
Enhancing ecological validity of LLM value evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-life user-LLM interaction items enhance relevance
Human-rated item similarity ensures psychometric validation
Correlations derived between ratings and value scores
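The selection step above can be sketched in code: for each candidate item, correlate participants' self-similarity ratings with their measured scores on a target value, and keep items whose correlation clears a cutoff. This is a minimal illustration of the idea, not the paper's exact procedure; the function name, the toy data, and the 0.3 threshold are all assumptions.

```python
import numpy as np

def select_items(ratings, value_scores, threshold=0.3):
    """Keep items whose ratings correlate with a target value score.

    ratings: (n_participants, n_items) self-similarity ratings
    value_scores: (n_participants,) measured scores for one value
    threshold: illustrative cutoff, not taken from the paper
    Returns a list of (item_index, pearson_r) pairs.
    """
    selected = []
    for j in range(ratings.shape[1]):
        # Pearson correlation between item j's ratings and the value scores
        r = np.corrcoef(ratings[:, j], value_scores)[0, 1]
        if r >= threshold:
            selected.append((j, r))
    return selected

# Toy example: 5 participants, 3 hypothetical items
rng = np.random.default_rng(0)
scores = rng.normal(size=5)
items = np.column_stack([
    scores + rng.normal(scale=0.1, size=5),  # tracks the value -> selected
    rng.normal(size=5),                      # unrelated noise
    -scores,                                 # anti-correlated -> excluded
])
print(select_items(items, scores))
```

Only items that positively track the self-reported value survive, which is the sense in which strongly correlated items "serve as reliable items for assessing those values."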