"Impressively Scary": Exploring User Perceptions and Reactions to Unraveling Machine Learning Models in Social Media Applications

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a critical transparency gap in locally executed machine learning models within social media applications—such as real-time face filters—where users remain unaware of model activation timing, execution location, and sensitive attribute inference. Employing a mixed-methods empirical approach—including model reverse-engineering, screen logging, semi-structured interviews, and two-week behavioral tracking—we provide the first in-situ characterization of opaque ML model behavior on end-user devices. Among 21 participants, eight adopted persistent privacy-preserving behaviors (e.g., disabling cameras, avoiding filters) due to perceived model invisibility. Results demonstrate that enhancing runtime observability of ML models significantly improves users’ sense of control and trust. Based on these findings, we propose “runtime ML transparency” as a novel user-centered paradigm, offering both theoretical foundations and actionable design principles for privacy-enhancing AI systems.

📝 Abstract
Machine learning models deployed locally in social media applications power features such as face filters, which read faces in real time and can expose sensitive attributes to the apps. However, the deployment of these models, e.g., when, where, and how they are used, is opaque to users. We aim to address this opacity and investigate how social media users' perceptions and behaviors change once they are exposed to these models. We conducted user studies (N=21) and found that participants were unaware of both what the models output and when the models were used in Instagram and TikTok, two major social media platforms. After being exposed to the models' functionality, 8 participants showed long-term behavior changes. Our analysis uncovers the challenges and opportunities in providing transparency for machine learning models that interact with local user data.
Problem

Research questions and friction points this paper is trying to address.

Explore user perceptions of opaque machine learning models in social media.
Investigate behavior changes after exposing users to model functionality.
Address challenges in providing transparency for local data-interacting models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

User studies revealing the opacity of ML models in social media apps
Documentation of long-term behavior changes after exposing users to model functionality
Exploration of challenges in providing transparency for ML models