Bytes of a Feather: Personality and Opinion Alignment Effects in Human-AI Interaction

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how AI assistants' personality traits (e.g., extraversion vs. introversion) and opinion stances (e.g., political and value-laden positions) jointly shape user preferences and perceptions. In a large-scale controlled experiment, 1,000 participants interacted with AI assistants that took on specific personality traits and opinion stances, enabling a systematic assessment of how opinion alignment and personality congruence affect perceived trustworthiness, competence, warmth, and persuasiveness. Opinion alignment emerged as the dominant driver of user preference: participants consistently preferred opinion-aligned models and rated them as more trustworthy, competent, warm, and persuasive, corroborating an AI-similarity-attraction hypothesis. Personality matching, in contrast, produced no or only weak effects, and in one case backfired: introvert participants rated introvert models as less trustworthy and competent. These findings identify opinion alignment as a central dimension of AI personalization while underscoring the limited utility of personality alignment and the need for a more grounded discussion of the limits and risks of personalized AI.

📝 Abstract
Interactions with AI assistants are increasingly personalized to individual users. As AI personalization is dynamic and machine-learning-driven, we have limited understanding of how personalization affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants that took on certain personality traits and opinion stances. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants also found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI-similarity-attraction hypothesis. In contrast, we observed no or only weak effects of AI personality alignment, with introvert models rated as less trustworthy and competent by introvert participants. These findings highlight opinion alignment as a central dimension of AI personalization and user preference, while underscoring the need for a more grounded discussion of the limits and risks of personalized AI.
Problem

Research questions and friction points this paper is trying to address.

Investigates how AI personality and opinion alignment affect user preferences
Examines whether opinion-aligned AI models are perceived as more trustworthy
Asks whether personality matching shapes user perceptions as strongly as opinion alignment does
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI assistants experimentally assigned opinion stances aligned (or misaligned) with user opinions
Evidence that personality alignment has only weak effects on trust, in contrast to opinion alignment
Large-scale controlled experiment (1,000 participants) testing the AI-similarity-attraction hypothesis