AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations

📅 2026-01-30
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models can extract, embody, and explain human values from everyday conversations, and how users perceive and trust such capabilities. Through a one-month human–AI dialogue experiment with 20 participants followed by semi-structured interviews, the authors developed a three-dimensional evaluation framework (value extraction, embodiment, and explanation) using their novel Value-Alignment Perception Toolkit (VAPT). Thirteen of the 20 participants left the study believing that AI systems can understand their values; participants also reported that the interactions fostered self-reflection and that they were readily persuaded by the AI's reasoning. The work introduces the concept of "weaponized empathy" as an emerging risk and proposes a design framework for value-aligned conversational systems grounded in transparency, informed consent, and safety safeguards.

📝 Abstract
Does AI understand human values? While this remains an open philosophical question, we take a pragmatic stance by introducing VAPT, the Value-Alignment Perception Toolkit, for studying how LLMs reflect people's values and how people judge those reflections. 20 participants texted a human-like chatbot over a month, then completed a 2-hour interview with our toolkit evaluating AI's ability to extract (pull details regarding), embody (make decisions guided by), and explain (provide proof of) human values. 13 participants left our study convinced that AI can understand human values. Participants found the experience insightful for self-reflection and found themselves getting persuaded by the AI's reasoning. Thus, we warn about "weaponized empathy": a potentially dangerous design pattern that may arise in value-aligned, yet welfare-misaligned AI. VAPT offers concrete artifacts and design implications to evaluate and responsibly build value-aligned conversational agents with transparency, consent, and safeguards as AI grows more capable and human-like into the future.
Problem

Research questions and friction points this paper is trying to address.

human values
value alignment
large language models
AI perception
weaponized empathy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Value Alignment
Large Language Models
Human-AI Interaction
Weaponized Empathy
Conversational Agents