🤖 AI Summary
This study investigates how mothers use large language models (LLMs) to obtain emotional support and parenting information while avoiding social judgment, thereby alleviating maternal anxiety and guilt. Through a 10-day mixed-methods survey (N=107) combining questionnaires with open-ended text analysis, the research shows that mothers use LLMs as nonjudgmental resources for emotion regulation, contextual validation, and parenting decisions. It also identifies that social context, particularly co-residence with extended family, significantly influences LLM adoption: mothers in such settings rely more heavily on LLMs to circumvent interpersonal scrutiny. The findings frame LLMs as low-risk interactive aids that complement, rather than replace, human support: while most participants prefer quick LLM consultations to avoid social repercussions, over half still value the warmth of interpersonal connections.
📝 Abstract
In the age of Large Language Models (LLMs), much work has examined how LLMs offer medication advice and serve as information providers; however, how mothers use these tools for emotional and informational support while avoiding social judgment remains underexplored. In this study, we conducted a 10-day mixed-methods exploratory survey (N=107) to investigate how mothers use LLMs as a non-judgmental resource for emotional support, emotion regulation, and situational reassurance. Our findings show that mothers ask LLMs a wide range of childcare questions to reassure themselves and avoid judgment, particularly around childcare decisions, maternal guilt, and late-night caregiving. Open-ended responses further show that mothers are comfortable with LLMs because they do not have to consider social consequences or judgment. Although participants turn to LLMs for quick information or reassurance, more than half still value human warmth over LLMs; a significant minority, especially those living in a joint family, nevertheless prefer LLMs to avoid human judgment. These findings help frame LLMs as low-risk interaction support rather than a replacement for human support, and highlight the role of social context in shaping emotional technology use.