Privacy Leakage Overshadowed by Views of AI: A Study on Human Oversight of Privacy in Language Model Agent

📅 2024-11-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study identifies a “fluency bias” induced by language model (LM) agents in intergenerational communication: users prioritize response fluency at the expense of privacy vigilance, increasing harmful privacy disclosure rates from 15.7% to 55.0%. Based on a 300-participant online task-based experiment, augmented by behavioral response analysis and clustering modeling, we first propose six interpretable privacy user archetypes—capturing heterogeneity in privacy concern, trust, and preference. We then design a bidirectional privacy alignment mechanism to calibrate user trust with agent behavior. Our contributions are threefold: (1) empirical validation of fluency as a systematic cognitive bias impairing privacy judgment; (2) the first privacy persona framework tailored to LM-agent interaction contexts; and (3) a deployable interaction design paradigm and evaluation benchmark for privacy-aware intelligent agents.

Technology Category

Application Category

📝 Abstract
Language model (LM) agents that act on users' behalf for personal tasks (e.g., replying emails) can boost productivity, but are also susceptible to unintended privacy leakage risks. We present the first study on people's capacity to oversee the privacy implications of the LM agents. By conducting a task-based survey (N=300), we investigate how people react to and assess the response generated by LM agents for asynchronous interpersonal communication tasks, compared with a response they wrote. We found that people may favor the agent response with more privacy leakage over the response they drafted or consider both good, leading to an increased harmful disclosure from 15.7% to 55.0%. We further identified six privacy profiles to characterize distinct patterns of concerns, trust, and privacy preferences in LM agents. Our findings shed light on designing agentic systems that enable privacy-preserving interactions and achieve bidirectional alignment on privacy preferences to help users calibrate trust.
Problem

Research questions and friction points this paper is trying to address.

Privacy Leakage
Language Intelligent Assistants
Privacy Protection and User Convenience Balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy Protection
AI Assistants
User Trust
🔎 Similar Papers
No similar papers found.