AutoLike: Auditing Social Media Recommendations through User Interactions

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Social media recommendation systems implicitly propagate harmful content—such as self-harm and eating disorder–related material—posing serious public health risks. Method: We propose AutoLike, the first black-box, interactive, automated framework that models recommendation system auditing as a reinforcement learning (RL) task. Without requiring API access or platform permissions, AutoLike simulates user “like” behaviors to actively steer recommendation feeds, thereby triggering and identifying nine categories of sensitive topics and their sentiment orientations. It integrates user behavioral modeling, fine-grained topic and sentiment classification, and RL-based policy optimization. Contribution/Results: Evaluated across eight experiments on TikTok, AutoLike increases exposure rates of targeted harmful topics by up to 7.3× while maintaining high stability. The framework provides a reproducible, scalable, and open-source tool for platform governance and algorithmic auditing, advancing transparency and accountability in recommender systems.

📝 Abstract
Modern social media platforms, such as TikTok, Facebook, and YouTube, rely on recommendation systems to personalize content for users based on user interactions with endless streams of content, such as "For You" pages. However, these complex algorithms can inadvertently deliver problematic content related to self-harm, mental health, and eating disorders. We introduce AutoLike, a framework to audit recommendation systems in social media platforms for topics of interest and their sentiments. To automate the process, we formulate the problem as a reinforcement learning problem. AutoLike drives the recommendation system to serve a particular type of content through interactions (e.g., liking). We apply the AutoLike framework to the TikTok platform as a case study. We evaluate how well AutoLike identifies TikTok content automatically across nine topics of interest, and conduct eight experiments to demonstrate how well it drives TikTok's recommendation system towards particular topics and sentiments. AutoLike has the potential to assist regulators in auditing recommendation systems for problematic content. (Warning: This paper contains qualitative examples that may be viewed as offensive or harmful.)
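The core idea above — an RL agent that observes served content and chooses interactions (e.g., liking) to steer the recommender toward a target topic — can be illustrated with a toy sketch. Everything below is invented for illustration and is not the paper's implementation: `ToyFeed`, the bandit-style reward, and all constants are simplified stand-ins for a real platform driver and the paper's actual RL formulation.

```python
import random

class ToyFeed:
    """Simulated recommender: liking target-topic items raises the
    probability that future items are on the target topic."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.p_target = 0.1  # initial exposure rate for the target topic

    def next_item(self):
        return "target" if self.rng.random() < self.p_target else "other"

    def interact(self, item, action):
        # Liking on-topic content nudges the simulated feed toward it.
        if action == "like" and item == "target":
            self.p_target = min(0.9, self.p_target + 0.05)

def train_policy(episodes=20, steps=200, alpha=0.5, eps=0.1, seed=1):
    """Tabular, bandit-style Q-learning over (item topic, action) pairs.
    Reward 1.0 for liking a target-topic item -- a drastically
    simplified stand-in for a real auditing reward."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("target", "other") for a in ("like", "skip")}
    for ep in range(episodes):
        feed = ToyFeed(seed=ep)
        for _ in range(steps):
            item = feed.next_item()
            if rng.random() < eps:  # epsilon-greedy exploration
                action = rng.choice(("like", "skip"))
            else:
                action = max(("like", "skip"), key=lambda a: q[(item, a)])
            feed.interact(item, action)
            reward = 1.0 if (item == "target" and action == "like") else 0.0
            q[(item, action)] += alpha * (reward - q[(item, action)])
    return lambda item: max(("like", "skip"), key=lambda a: q[(item, a)])

def exposure_rate(policy, steps=200, seed=123):
    """Fraction of served items on the target topic under a policy."""
    feed, hits = ToyFeed(seed), 0
    for _ in range(steps):
        item = feed.next_item()
        hits += (item == "target")
        feed.interact(item, policy(item))
    return hits / steps
```

Comparing the learned "like" strategy against passive scrolling (`lambda item: "skip"`) with `exposure_rate` shows the auditing effect in miniature: the trained policy drives the simulated feed's target-topic exposure well above the passive baseline, which is the behavior the paper measures on TikTok at full scale.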
Problem

Research questions and friction points this paper is trying to address.

Auditing social media recommendation systems
Identifying problematic content automatically
Driving recommendations towards specific topics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning automates the auditing process
AutoLike steers content recommendations via user interactions
Evaluates topic and sentiment identification on TikTok