🤖 AI Summary
This study investigates whether TikTok's personalized recommendation system exacerbates user polarization on contentious topics such as politics, climate change, vaccines, and conspiracy theories. By deploying controlled simulated accounts and combining content annotation, longitudinal analysis of recommendation trajectories, and cross-topic comparisons, the research distinguishes and empirically evaluates three forms of personalization-driven drift: preference-aligned drift, polarization-topic drift, and polarization-stance drift. The findings reveal marked topic dependency: the algorithm strongly amplifies interest in U.S. political content, where it favors oppose-stance items; it generally reinforces users' preexisting stances on other polarizing topics; and it exerts a neutralizing effect on misinformation-themed material. These results underscore the highly contingent nature of algorithmic influence on polarization, which varies significantly across thematic domains.
📝 Abstract
Social media platforms have become an integral part of everyday life, serving as a primary source of news and information for many users. These platforms increasingly rely on personalised recommendation systems that shape what users see and engage with. While these systems are optimised for engagement, concerns have emerged that they may also drive users toward more polarised perspectives, particularly in contested domains such as politics, climate change, vaccines, and conspiracy theories. In this paper, we present an algorithmic audit of personalisation drift on TikTok across these polarising topics. Using controlled accounts designed to simulate users with interests aligned with or opposed to different polarising topics, we systematically measure the extent to which TikTok steers content exposure toward specific topics and polarities over time. Specifically, we investigate: 1) preference-aligned drift (showing strong personalisation towards user interests); 2) polarisation-topic drift (showing a strong neutralising effect for misinformation-themed topics, and a strong preference for, and reinforcement of interest in, the US politics topic); and 3) polarisation-stance drift (showing a preference for oppose-stance content on the US politics topic, and a general reinforcement of users' stances through recommendations aligned with their stance on polarising topics). Overall, our findings provide evidence that recommendation trajectories differ markedly across topics, with some pathways amplifying polarised viewpoints more strongly than others, and offer insights for platform governance, transparency, and user awareness.
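The kind of drift measurement described above can be illustrated with a minimal sketch: compare the share of recommended items aligned with a seeded topic (or stance) between an early and a later feed window. The function names, labels, and windowing below are illustrative assumptions, not the paper's actual annotation pipeline or metrics.

```python
from typing import List

def aligned_share(labels: List[str], target: str) -> float:
    """Fraction of recommended items in one feed window whose label matches the target."""
    return sum(1 for label in labels if label == target) / len(labels) if labels else 0.0

def drift(trajectory: List[List[str]], target: str) -> float:
    """Change in aligned share between the last and first feed windows of a trajectory."""
    return aligned_share(trajectory[-1], target) - aligned_share(trajectory[0], target)

# Hypothetical annotated feed windows for one simulated account seeded on politics
windows = [
    ["politics", "music", "sports", "politics"],    # early session: share 0.50
    ["politics", "politics", "music", "politics"],  # later session: share 0.75
]
print(drift(windows, "politics"))  # prints 0.25 (exposure drifted toward the seeded topic)
```

A positive value indicates the feed drifted toward the seeded interest (preference-aligned drift); computing the same quantity over stance labels instead of topic labels would correspond to the stance-level comparison.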