The Pervasive Blind Spot: Benchmarking VLM Inference Risks on Everyday Personal Videos

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the privacy risks posed by vision-language models (VLMs) in inferring sensitive personal information—such as age, occupation, and health status—from everyday personal videos. Method: Leveraging a crowdsourced dataset of 508 authentic personal videos, we conduct a benchmark evaluation of mainstream VLMs using human baseline experiments and multi-dimensional prompting strategies. Contribution/Results: We find that VLMs significantly outperform humans in sensitive attribute inference (average +23.6% accuracy), with performance critically dependent on temporal behavioral modeling rather than static object recognition. Common objects frequently act as misleading confounders, inducing erroneous attributions. Moreover, model-generated explanations exhibit extremely low alignment with actual reasoning grounds (mean IoU < 0.18). These findings expose the “black-box reasoning” privacy vulnerability of VLMs in personal video contexts, providing empirical foundations and methodological insights for developing trustworthy VLMs and privacy-preserving mechanisms.
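The explanation-alignment finding is reported as a mean IoU below 0.18 between model-cited regions and the regions that actually drove the inference. A minimal sketch of how such an intersection-over-union alignment score could be computed over axis-aligned bounding boxes; the function names and `(x1, y1, x2, y2)` box format are illustrative assumptions, not the paper's implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_alignment(pairs):
    """Mean IoU over (explanation_region, evidence_region) pairs."""
    return sum(iou(a, b) for a, b in pairs) / len(pairs)
```

Under this sketch, a mean alignment below 0.18 means the regions a model cites in its explanation overlap only marginally with the regions whose removal changes its prediction.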

📝 Abstract
The proliferation of Vision-Language Models (VLMs) introduces profound privacy risks for personal videos. This paper addresses a critical yet underexplored inferential privacy threat: the risk of inferring sensitive personal attributes from such data. To address this gap, we crowdsourced a dataset of 508 everyday personal videos from 58 individuals and conducted a benchmark study evaluating VLM inference capabilities against human performance. Our findings reveal three critical insights: (1) VLMs possess superhuman inferential capabilities, significantly outperforming human evaluators by shifting from static object recognition to behavioral inference over temporal streams. (2) Inferential risk is strongly correlated with factors such as video characteristics and prompting strategies. (3) VLM-generated explanations of these inferences are unreliable: we reveal a disconnect between the explanations and their actual evidential impact, identifying ubiquitous objects as misleading confounders.
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLM privacy risks from inferring sensitive personal attributes
Benchmarking superhuman VLM inference capabilities against human performance
Analyzing factors affecting inferential risks and unreliable model explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarked VLM inference risks using crowdsourced personal videos
Revealed superhuman VLM capabilities in behavioral inference
Identified unreliable explanations and misleading object confounders
Shuning Zhang
Tsinghua University
HCI · Usable Privacy and Security · AI
Zhaoxin Li
Georgia Institute of Technology
Robot Learning · Explainable Artificial Intelligence
Changxi Wen
Tsinghua University, China
Ying Ma
School of Computing and Information Systems, University of Melbourne, Australia
Simin Li
School of Electronic and Information Engineering, Beihang University, China
Gengrui Zhang
Zhili College, Tsinghua University, China
Ziyi Zhang
School of Information Sciences, University of Illinois at Urbana-Champaign, United States
Yibo Meng
Tsinghua University, China
Hantao Zhao
Southeast University
Human-Computer Interaction · Virtual Reality
Xin Yi
Tsinghua University, China
Hewu Li
Tsinghua University, China