"Who wants to be nagged by AI?": Investigating the Effects of Agreeableness on Older Adults' Perception of LLM-Based Voice Assistants' Explanations

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the perceived explainability of large language model–driven voice assistants varies with their level of agreeableness among older adults, particularly across everyday versus emergency home scenarios. In an experiment with 70 older participants, the researchers manipulated assistant agreeableness (high vs. low) and compared real-time environmental explanations against retrospective history-based ones. Results indicate that highly agreeable assistants are trusted and preferred more in routine contexts, whereas clarity supersedes warmth in emergencies. Real-time explanations consistently outperformed retrospective ones, and participants high in trait agreeableness rated low-agreeableness assistants significantly more negatively. These findings underscore the need for personalized explainability strategies that account for user personality, situational context, and individual characteristics, while also revealing that social tone and perceived competence operate as distinct dimensions, challenging the efficacy of one-size-fits-all explanation approaches.

📝 Abstract
LLM-based voice assistants (VAs) increasingly support older adults aging in place, yet how an assistant's agreeableness shapes explanation perception remains underexplored. We conducted a study (N=70) examining how VA agreeableness influences older adults' perceptions of explanations across routine and emergency home scenarios. High-agreeableness assistants were perceived as more trustworthy, empathetic, and likable, but these benefits diminished in emergencies, where clarity outweighed warmth. Agreeableness did not affect perceived intelligence, suggesting social tone and competence are separable dimensions. Real-time environmental explanations outperformed history-based ones, and agreeable older adults penalized low-agreeableness assistants more strongly. These findings highlight the need to move beyond a one-size-fits-all approach to AI explainability, balancing personality, context, and audience.
Problem

Research questions and friction points this paper is trying to address.

agreeableness
older adults
LLM-based voice assistants
explanation perception
AI explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

agreeableness
LLM-based voice assistants
explainability
older adults
context-aware AI