🤖 AI Summary
This study investigates the disconnect between users' perception of privacy risks and their protective behaviours in chatbot interactions, particularly in contexts lacking immediate threat cues. Method: The authors developed a novel "privacy-safe" experimental setup combining quantitative response coding with qualitative content analysis, under guarantees of participant anonymization and strict non-sharing of data. Contribution/Results: 76% of participants lacked a basic awareness of privacy risks, and 27% did not understand how chatbot providers handle their data. Older users in particular feared that providers might sell their data, and even expert users showed systematic gaps in protective practices. Critically, the study identifies a paradoxical pattern: users with higher privacy knowledge did not consistently behave more protectively, especially when assured of transparent data processing. These findings challenge the assumption of cognition-behaviour consistency and offer both theoretical refinement and empirical grounding for privacy governance in AI-mediated interactions.
📝 Abstract
Introduction: The use of chatbots is becoming increasingly important across various aspects of daily life. However, the privacy concerns associated with these communications have not yet been thoroughly addressed. The aim of this study was to investigate user awareness of privacy risks in chatbot interactions, the privacy-preserving behaviours users practise, and how these behaviours relate to their awareness of privacy threats, even when no immediate threat is perceived. Methods: We developed a novel "privacy-safe" setup to analyse user behaviour under guarantees of anonymization and non-sharing. We employed a mixed-methods approach, first quantifying broader trends by coding responses and then conducting a qualitative content analysis to gain deeper insights. Results: Overall, participants showed a substantial lack of understanding of how chatbot providers handle data (27% of participants) and of basic privacy risks (76% of participants). Older users, in particular, expressed fears that chatbot providers might sell their data. Moreover, even users with privacy knowledge did not consistently exhibit privacy-preserving behaviours when assured of transparent data processing by chatbots. Notably, under-protective behaviours were observed among more expert users. Discussion: These findings highlight the need for a strategic approach to enhancing user education on privacy concepts, ensuring informed decisions when interacting with chatbot technology. This includes the development of tools that help users monitor and control the information they share with chatbots.