📝 Abstract
Social media platforms, especially Facebook parenting groups, have long served as informal support networks for mothers seeking advice and reassurance. However, growing concerns about social judgment, privacy exposure, and unreliable information are changing how mothers seek help. This exploratory mixed-methods study examines why mothers are moving from Facebook parenting groups to large language models (LLMs) such as ChatGPT and Gemini. We conducted a cross-sectional online survey of 109 mothers. Results show that 41.3% of participants avoided Facebook parenting groups because they expected judgment from others. Avoidance differed significantly by residential location and family structure: mothers living in their home country and those in joint families were more likely to avoid the groups. Qualitative analysis revealed three themes: social judgment and exposure, LLMs as safe and private spaces, and quick and structured support. Participants described LLMs as immediate, emotionally safe, and reliable alternatives that reduce the social risk of asking for help. Rather than replacing human support, LLMs appear to fill emotional and practical gaps within existing support systems. These findings point to a shift in maternal digital support-seeking and highlight the need to design LLM systems that provide both accurate information and emotional safety.