Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts

📅 2026-03-25
🤖 AI Summary
This study addresses the risks of excessive anthropomorphism in AI front ends deployed in sensitive contexts—such as supporting survivors of gender-based violence—where it can misalign users’ mental models, engender misplaced trust, and undermine autonomy. Integrating human-computer interaction, natural language processing, and value-sensitive design, the work conceptualizes front-end interaction itself as a procedural ethical practice and advocates restrained anthropomorphism guided by trauma-informed principles. Through a case study with Chayn, a nonprofit organization, the research illustrates how deliberately avoiding anthropomorphic design can safeguard users’ psychological safety and agency. The findings offer an ethically grounded interaction paradigm for high-stakes AI applications that balances normative values with user needs.

📝 Abstract
Ethical debates in AI have primarily focused on back-end issues such as data governance, model training, and algorithmic decision-making. Less attention has been paid to the ethical significance of front-end design choices: the interaction and representation elements through which users engage with AI systems. This gap is particularly significant for Conversational User Interfaces (CUIs) built on Natural Language Processing (NLP) systems, where humanizing design elements such as dialogue-based interaction, emotive language, personality modes, and anthropomorphic metaphors are increasingly prevalent. This work argues that humanization in AI front-end design is a value-driven choice that profoundly shapes users' mental models, trust calibration, and behavioral responses. Drawing on research in human-computer interaction (HCI), conversational AI, and value-sensitive design, we examine how interfaces can play a central role in misaligning user expectations, fostering misplaced trust, and subtly undermining user autonomy, especially in vulnerable contexts. To ground this analysis, we discuss two AI systems developed by Chayn, a nonprofit organization supporting survivors of gender-based violence. Chayn is extremely cautious when building AI that interacts with or impacts survivors, operationalizing its trauma-informed design principles throughout. This case study illustrates how ethical considerations can motivate principled restraint in interface design, challenging engagement-based norms in contemporary AI products. We argue that ethical front-end AI design is a form of procedural ethics, enacted through interaction choices rather than embedded solely in system logic.
Problem

Research questions and friction points this paper is trying to address.

humanization
ethical design
conversational AI
user autonomy
front-end design
Innovation

Methods, ideas, or system contributions that make the work stand out.

ethical front-end design
humanization
conversational AI
value-sensitive design
trauma-informed AI
Silvia Rossi
Full Professor in Computer Science at Università degli Studi di Napoli Federico II
Social Robotics · Socially Assistive Robotics · Adaptive Behavior · Trustworthy HRI
Diletta Huyskes
University of Milan, Immanence, Milan, Italy
Mackenzie Jorgensen
Northumbria University, Immanence, Newcastle, UK