🤖 AI Summary
There is no causal evidence that AI anthropomorphism universally enhances user trust and engagement across global populations, and existing safety frameworks neglect cultural diversity. Method: We conducted two large-scale, multinational randomized controlled trials across 10 countries (N = 3,500), combining real-time conversational interaction with cross-culturally validated measurement instruments. Contribution/Results: We provide the first empirical evidence that while anthropomorphic design consistently strengthens users’ perception of AI as human-like, its effects on behavioral trust and engagement exhibit significant cultural heterogeneity: for instance, the same design choices increase trust among Brazilian users yet decrease it among Japanese users. These findings challenge the universalist assumption that “more human-like equals more trustworthy” and motivate a culture-moderated paradigm for human–AI interaction. The study delivers a rigorous, cross-cultural empirical foundation for globally responsible AI ethics, design, and safety governance.
📝 Abstract
Over a billion users across the globe interact with AI systems engineered with increasing sophistication to mimic human traits. This shift has triggered urgent debate regarding anthropomorphism, the attribution of human characteristics to synthetic agents, and its potential to induce misplaced trust or emotional dependency. However, the causal link between more humanlike AI design and subsequent effects on engagement and trust has not been tested in realistic human–AI interactions with a global user pool. Prevailing safety frameworks continue to rely on theoretical assumptions derived from Western populations, overlooking the global diversity of AI users. Here, we address these gaps through two large-scale cross-national experiments (N = 3,500) across 10 diverse nations, involving real-time, open-ended interactions with an AI system. We find that when evaluating an AI's human-likeness, users focus less on the theoretical attributes often cited in policy (e.g., sentience or consciousness) than on applied, interactional cues such as conversation flow or understanding of the user's perspective. We also experimentally demonstrate that humanlike design levers can causally increase anthropomorphism among users; however, we do not find that humanlike design universally increases behavioral measures of user engagement and trust, as previous theoretical work suggests. Instead, part of the connection between human-likeness and behavioral outcomes is fractured by culture: specific design choices that foster self-reported trust in AI systems in some populations (e.g., Brazil) may trigger the opposite result in others (e.g., Japan). Our findings challenge prevailing narratives of inherent risk in humanlike AI design. Instead, we identify a nuanced, culturally mediated landscape of human–AI interaction, which demands that we move beyond a one-size-fits-all approach in AI governance.