🤖 AI Summary
Generative AI companions for adolescents lack developmentally informed risk-mitigation mechanisms, and parents and child-development experts perceive those risks in markedly different ways. This study uses semi-structured interviews, in which both stakeholder groups reviewed authentic AI–adolescent conversation snippets, to compare their risk-judgment logics. It contributes a dual-perspective, context-sensitive risk-assessment framing and identifies temporal patterning within dialogues as a key assessment signal. Risk moderators include adolescent developmental maturity, the AI character's age persona, and how the AI models values and norms. Building on these findings, the work proposes layered, dynamic intervention strategies, shifting AI safety design from reactive incident management toward proactive developmental safeguarding and providing both theoretical grounding and actionable guidelines for age-appropriate, interpretable, and controllable generative AI companions for youth.
📝 Abstract
AI companions are increasingly popular among teenagers, yet current platforms lack safeguards to address developmental risks and harmful normalization. Despite growing concerns, little is known about how parents and developmental psychology experts assess these interactions or what protections they consider necessary. We conducted 26 semi-structured interviews with parents and experts, who reviewed real-world youth–GenAI companion conversation snippets. We found that stakeholders assessed risks contextually, attending to factors such as youth maturity, AI character age, and how AI characters modeled values and norms. We also identified distinct logics of assessment: parents flagged single events, such as a mention of suicide or flirtation, as high risk, whereas experts looked for patterns over time, such as repeated references to self-harm or sustained dependence. Both groups proposed interventions, with parents favoring broader oversight and experts preferring cautious, crisis-only escalation paired with youth-facing safeguards. These findings provide directions for embedding safety into AI companion design.