🤖 AI Summary
This study investigates users’ privacy perceptions of, and attitudes toward, the inference of sensitive attributes—such as location and demographic characteristics—from social media videos by vision-language models (VLMs). Drawing on semi-structured interviews with 17 participants, it presents the first empirical investigation into users’ understanding of VLMs’ video analysis capabilities, the privacy concerns those capabilities raise, and the mitigation strategies users adopt. Findings reveal widespread apprehension about misuse, pervasive surveillance, and re-identification risks; although users attempt behavioral avoidance, they report a pronounced lack of agency and effective control. Participants strongly advocate for greater platform transparency about VLM functionality and for the integration of privacy-by-design mechanisms. By centering user perspectives, the study addresses a critical gap in VLM privacy research and provides human-centered evidence to inform responsible AI governance frameworks for video-based systems.
📝 Abstract
The rapid advancement of Vision-Language Models (VLMs) has enabled sophisticated analysis of visual content, raising concerns about the inference of sensitive user attributes and the privacy risks that follow. While the technical capabilities of VLMs are increasingly studied, users' understanding of, perceptions of, and reactions to these inferences remain underexplored, especially for videos uploaded to social media. This paper addresses this gap through semi-structured interviews (N=17) investigating user perspectives on VLM-driven sensitive attribute inference from their visual data. Findings reveal that users perceive VLMs as capable of inferring a range of attributes, including location, demographics, and socioeconomic indicators, often with unsettling accuracy. Key concerns include unauthorized identification, misuse of personal information, pervasive surveillance, and harm from inaccurate inferences. Participants report employing various mitigation strategies, though with skepticism about their ultimate effectiveness against advanced AI. Users also articulate clear expectations for platforms and regulators, emphasizing the need for enhanced transparency, user control, and proactive privacy safeguards. These insights are crucial for guiding the development of responsible AI systems, effective privacy-enhancing technologies, and informed policymaking that aligns with user expectations and societal values.