Privacy-Preserving Behaviour of Chatbot Users: Steering Through Trust Dynamics

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the mechanism underlying the disconnect between users' privacy risk perception and their protective behaviours in chatbot interactions, particularly in contexts lacking immediate threat cues. Method: The authors developed a novel "privacy-safe" experimental setup that guarantees participant anonymization and strict non-sharing of data, and applied a mixed-methods approach combining quantitative response coding with qualitative content analysis. Contribution/Results: 76% of participants lacked a basic awareness of privacy risks, and 27% did not understand how chatbot providers handle their data. Older users in particular feared that providers might sell their data, and even expert users showed systematic gaps in protective practices. Notably, users with privacy knowledge did not consistently exhibit privacy-preserving behaviour, with trust perceptions appearing to moderate this gap. These findings challenge the assumed consistency between privacy cognition and behaviour, and offer both theoretical refinement and empirical grounding for privacy governance in AI-mediated interactions.

📝 Abstract
Introduction: The use of chatbots is becoming increasingly important across various aspects of daily life. However, the privacy concerns associated with these communications have not yet been thoroughly addressed. The aim of this study was to investigate user awareness of privacy risks in chatbot interactions, the privacy-preserving behaviours users practice, and how these behaviours relate to their awareness of privacy threats, even when no immediate threat is perceived. Methods: We developed a novel "privacy-safe" setup to analyse user behaviour under guarantees of anonymization and non-sharing. We employed a mixed-methods approach, first quantifying broader trends by coding responses and then conducting a qualitative content analysis to gain deeper insights. Results: Overall, there was a substantial lack of understanding among users about how chatbot providers handle data (27% of the participants) and about the basics of privacy risks (76% of the participants). Older users, in particular, expressed fears that chatbot providers might sell their data. Moreover, even users with privacy knowledge did not consistently exhibit privacy-preserving behaviours when assured of transparent data processing by chatbots. Notably, under-protective behaviours were observed among more expert users. Discussion: These findings highlight the need for a strategic approach to user education on privacy concepts, so that users can make informed decisions when interacting with chatbot technology. This includes the development of tools to help users monitor and control the information they share with chatbots.
Problem

Research questions and friction points this paper is trying to address.

Investigates user awareness of chatbot privacy risks
Examines privacy-preserving behaviors in chatbot interactions
Analyzes trust dynamics in data sharing with chatbots
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy-safe setup for anonymized user analysis
Mixed-methods approach combining quantitative and qualitative analysis
Tools for monitoring and controlling shared information
👥 Authors
Julia Ive (University College London)
Vishal Yadav (Queen Mary University of London, School of Electronic Engineering and Computer Science, London, UK)
Mariia Ignashina (Queen Mary University of London, School of Electronic Engineering and Computer Science, London, UK)
Matthew Rand (Queen Mary University of London, School of Electronic Engineering and Computer Science, London, UK)
Paulina Bondaronek (University College London)

🏷️ Topics: digital health, evaluation, natural language processing