Algorithmic Audit of Personalisation Drift in Polarising Topics on TikTok

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether TikTok's personalised recommendation system exacerbates user polarisation on contentious topics such as politics, climate change, and vaccines. Deploying controlled simulated accounts and combining content annotation, longitudinal analysis of recommendation trajectories, and cross-topic comparison, the research distinguishes and empirically evaluates three forms of personalisation-driven drift: preference-aligned drift, topic-level polarisation drift, and stance-level polarisation drift. Findings reveal that TikTok's algorithm is markedly topic-dependent: it strongly amplifies U.S. political content and favours oppose-stance items within it, generally reinforces users' pre-existing stances on other polarising topics, and exerts a neutralising effect on conspiracy- and misinformation-related material. These results underscore that algorithmic influence on polarisation is highly contingent, varying significantly across thematic domains.

📝 Abstract
Social media platforms have become an integral part of everyday life, serving as a primary source of news and information for many users. These platforms increasingly rely on personalised recommendation systems that shape what users see and engage with. While these systems are optimised for engagement, concerns have emerged that they may also drive users toward more polarised perspectives, particularly in contested domains such as politics, climate change, vaccines, and conspiracy theories. In this paper, we present an algorithmic audit of personalisation drift on TikTok in these polarising topics. Using controlled accounts designed to simulate users with interests aligned with or opposed to different polarising topics, we systematically measure the extent to which TikTok steers content exposure toward specific topics and polarities over time. Specifically, we investigate: 1) preference-aligned drift (showing strong personalisation towards user interests); 2) polarisation-topic drift (showing a strong neutralising effect for misinformation-themed topics, and a high preference for and reinforcement of interest in the US politics topic); and 3) polarisation-stance drift (showing a preference for oppose-stance content on the US politics topic, and a general reinforcement of users' stances through recommendations aligned with their stance towards polarising topics). Overall, our findings provide evidence that recommendation trajectories differ markedly across topics, with some pathways amplifying polarised viewpoints more strongly than others, and offer insights for platform governance, transparency, and user awareness.
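The core measurement behind such an audit — tracking what share of a sock-puppet account's recommendations matches its seeded topic across successive sessions — can be sketched as below. The data layout and names here are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter

def drift_curve(sessions, seeded_topic):
    """Per-session fraction of recommended items matching the seeded topic
    for one simulated (sock-puppet) account.

    `sessions` is a list of viewing sessions; each session is a list of
    (topic, stance) annotations for the recommended videos. A rising curve
    would indicate preference-aligned drift; a flat or falling one would
    suggest a neutralising effect.
    """
    curve = []
    for session in sessions:
        topic_counts = Counter(topic for topic, _ in session)
        share = topic_counts[seeded_topic] / len(session) if session else 0.0
        curve.append(share)
    return curve

# Toy log for an account seeded with a "climate" interest (invented data).
log = [
    [("climate", "pro"), ("sports", None), ("news", None), ("climate", "anti")],
    [("climate", "pro"), ("climate", "pro"), ("news", None), ("climate", "anti")],
    [("climate", "pro"), ("climate", "pro"), ("climate", "anti"), ("music", None)],
]
print(drift_curve(log, "climate"))
```

The same per-session aggregation, applied to stance labels instead of topic labels, would give the stance-level drift the abstract distinguishes from topic-level drift.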
Problem

Research questions and friction points this paper is trying to address.

personalisation drift
polarising topics
recommendation systems
algorithmic audit
social media
Innovation

Methods, ideas, or system contributions that make the work stand out.

algorithmic audit
personalisation drift
recommendation systems
content polarisation
TikTok
Branislav Pecher, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Adrian Bindas, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Jan Jakubcik, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Matus Tuna, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Matus Tibensky, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Simon Liska, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Peter Sakalik, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Andrej Suty, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Matej Mosnar, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Filip Hossner, Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia
Ivan Srba, Kempelen Institute of Intelligent Technologies
AI · Machine Learning · Natural Language Processing · Social Computing · Disinformation