🤖 AI Summary
This study addresses the underexplored risks associated with frequent updates of social AI chatbots, particularly concerning user psychological well-being and addiction potential. Leveraging over 210,000 Google Play reviews of Character AI, with a focus on negative reviews, this work pioneers an integrative analysis that maps app version iterations to large-scale real-world user feedback. Through version-aware thematic modeling and linguistic pattern mining, the research reveals how specific system updates dynamically influence users' psychological perceptions. Findings indicate that certain releases significantly intensified negative sentiment, primarily due to technical malfunctions, and coincided with a marked rise in user expressions of emotional distress and psychological dependency. These results offer novel empirical evidence and a human-centered perspective for evaluating the socio-emotional impacts of evolving AI systems.
📝 Abstract
Artificial Intelligence (AI) chatbots are increasingly used for emotional, creative, and social support, leading to sustained and routine user interaction with these systems. As these applications evolve through frequent version updates, changes in functionality or behavior may influence how users evaluate them. However, little work has examined how publicly expressed user feedback varies across app versions in real-world deployment contexts. This study analyzes 210,840 Google Play reviews of the chatbot application Character AI, linking each review to the app version active at the time of posting. We specifically examine negative reviews to study how version-level rating trends and linguistic patterns reflect user experiences. Our results show that user ratings fluctuate across successive versions, with certain releases associated with stronger negative evaluations. Thematic analysis indicates that dissatisfaction is concentrated around recurring technical malfunctions and errors. A subset of reviews additionally frames these concerns in terms of potential psychological or addiction-related effects. The findings highlight how aggregate user evaluations and expressed concerns vary across software iterations, offer empirical insight into how update cycles relate to user feedback patterns, and underscore the importance of stability and transparent communication in evolving AI systems.
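The version-linking step described in the abstract, attaching to each review the app version active at its posting time, can be sketched with an as-of join. This is a minimal illustration, not the study's actual pipeline: the release dates, column names, and version numbers below are hypothetical.

```python
import pandas as pd

# Hypothetical release log: each app version and its release timestamp.
releases = pd.DataFrame({
    "version": ["1.0.0", "1.1.0", "1.2.0"],
    "released_at": pd.to_datetime(["2023-01-01", "2023-03-15", "2023-06-01"]),
}).sort_values("released_at")

# Hypothetical reviews with their posting timestamps.
reviews = pd.DataFrame({
    "review_id": [1, 2, 3],
    "posted_at": pd.to_datetime(["2023-02-10", "2023-04-01", "2023-07-20"]),
}).sort_values("posted_at")

# merge_asof with direction="backward" assigns each review the most
# recent version released at or before the review's posting time.
linked = pd.merge_asof(
    reviews,
    releases,
    left_on="posted_at",
    right_on="released_at",
    direction="backward",
)
print(linked[["review_id", "version"]])
```

With review-level version labels in place, ratings and review text can be grouped by version to compute the version-level trends the study reports.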