Principles of Safe AI Companions for Youth: Parent and Expert Perspectives

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative AI companions for adolescents lack developmentally informed risk-mitigation mechanisms, and parental and expert (child developmental psychology) perceptions of those risks diverge significantly. This study uses semi-structured interviews, augmented by fine-grained analysis of authentic AI–adolescent dialogues, to compare the risk-judgment logics of these two stakeholder groups. It introduces a dual-perspective, context-sensitive risk-assessment framework and a mechanism for recognizing temporal patterns across dialogues. Key risk moderators identified include adolescent developmental maturity, the AI agent's age-persona design, and how the AI models values and norms. Based on these findings, the authors propose a layered, dynamic intervention strategy. The work moves AI safety design from reactive incident management toward proactive developmental safeguarding, providing both theoretical foundations and actionable guidelines for designing age-appropriate, interpretable, and controllable generative AI companions for youth.

📝 Abstract
AI companions are increasingly popular among teenagers, yet current platforms lack safeguards to address developmental risks and harmful normalization. Despite growing concerns, little is known about how parents and developmental psychology experts assess these interactions or what protections they consider necessary. We conducted 26 semi-structured interviews with parents and experts, who reviewed real-world youth GenAI companion conversation snippets. We found that stakeholders assessed risks contextually, attending to factors such as youth maturity, AI character age, and how AI characters modeled values and norms. We also identified distinct logics of assessment: parents flagged single events, such as a mention of suicide or flirtation, as high risk, whereas experts looked for patterns over time, such as repeated references to self-harm or sustained dependence. Both groups proposed interventions, with parents favoring broader oversight and experts preferring cautious, crisis-only escalation paired with youth-facing safeguards. These findings provide directions for embedding safety into AI companion design.
Problem

Research questions and friction points this paper is trying to address.

AI companions lack safeguards for youth developmental risks
Parents and experts assess AI risks through different contextual factors
Stakeholders propose distinct safety interventions for AI companion design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducted 26 semi-structured interviews with parents and developmental psychology experts
Identified contextual risk factors for youth, including maturity, AI character age, and value modeling
Proposed layered safeguards combining parental oversight with cautious, crisis-only escalation
Yaman Yu
University of Illinois Urbana-Champaign
usable privacy and security, human-computer interaction, accessibility, Web3
Mohi
University of Illinois Urbana–Champaign, Urbana, IL, USA
Aishi Debroy
Swarthmore College, Swarthmore, PA, USA
Xin Cao
University of Illinois Urbana–Champaign, Urbana, IL, USA
Karen Rudolph
University of Illinois Urbana–Champaign, Urbana, IL, USA
Yang Wang
University of Illinois Urbana–Champaign, Urbana, IL, USA