Algorithmic resolution of crowd-sourced moderation on X in polarized settings across countries

📅 2025-06-18
🤖 AI Summary
This study examines the content-moderation efficacy of X's Community Notes system across 13 politically polarized countries, asking whether it can mitigate polarization, safeguard civic discourse, and uphold electoral integrity. Method: Leveraging 1.9 million moderation notes and 135 million user ratings, the authors construct a cross-national ideological scale and apply latent variable modeling alongside causal robustness analysis. Contribution/Results: This is the first empirical test of how well ideological modeling generalizes for a globally deployed crowdsourced moderation system. The system reliably identifies the dominant polarization dimension in each country, but its reliance on "cross-ideological consensus" significantly reduces moderation success rates for the most polarizing content, revealing a structural failure risk. These findings expose a fundamental limitation of global crowdsourced moderation in multi-polarized contexts and provide critical empirical evidence for platform governance and democratic resilience research.

📝 Abstract
Social platforms increasingly transition from expert fact-checking to crowd-sourced moderation, with X pioneering this shift through its Community Notes system, enabling users to collaboratively moderate misleading content. To resolve conflicting moderation, Community Notes learns a latent ideological dimension and selects notes garnering cross-partisan support. As this system, designed for and evaluated in the United States, is now deployed worldwide, we evaluate its operation across diverse polarization contexts. We analyze 1.9 million moderation notes with 135 million ratings from 1.2 million users, cross-referencing ideological scaling data across 13 countries. Our results show X's Community Notes effectively captures each country's main polarizing dimension but fails by design to moderate the most polarizing content, posing potential risks to civic discourse and electoral processes.
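The mechanism the abstract describes, learning a latent ideological dimension from ratings and surfacing only notes that draw cross-partisan support, can be sketched as a one-dimensional matrix factorization. The toy below is an illustrative assumption, not X's production algorithm (whose details differ): each rating is modeled as `mu + b_u + b_n + f_u * f_n`, where the 1-D factors `f` absorb ideology-driven agreement and the note intercept `b_n` captures helpfulness that holds across camps. The synthetic data, camp split, and thresholding logic are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_notes = 40, 6
# Hypothetical data: users split into two ideological camps (+1 / -1).
camp = np.where(np.arange(n_users) < n_users // 2, 1.0, -1.0)
# Notes 0-2 are "bridging" (rated helpful by both camps);
# notes 3-5 are partisan (helpful to one camp, unhelpful to the other).
note_quality = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
note_side = np.array([0.0, 0.0, 0.0, 1.0, 1.0, -1.0])
R = note_quality[None, :] + camp[:, None] * note_side[None, :]
R += 0.05 * rng.standard_normal(R.shape)

# Fit mu, user/note intercepts, and 1-D factors by gradient descent.
mu = 0.0
b_u = np.zeros(n_users)
b_n = np.zeros(n_notes)
f_u = 0.1 * rng.standard_normal(n_users)
f_n = 0.1 * rng.standard_normal(n_notes)
lr, lam = 0.05, 0.01
for _ in range(2000):
    pred = mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, f_n)
    err = pred - R
    mu -= lr * err.mean()
    b_u -= lr * (err.mean(axis=1) + lam * b_u)
    b_n -= lr * (err.mean(axis=0) + lam * b_n)
    f_u -= lr * (err @ f_n / n_notes + lam * f_u)
    f_n -= lr * (err.T @ f_u / n_users + lam * f_n)

# Partisan agreement is soaked up by the f_u * f_n term, so only the
# bridging notes keep a high ideology-independent intercept b_n; a
# consensus rule surfaces notes whose intercept clears a threshold.
print(np.round(b_n, 2))
```

On this toy data the three bridging notes end up with markedly higher intercepts than the three partisan notes, which is exactly the "fails by design" point of the paper: a note that is genuinely polarizing, however accurate, cannot earn a high consensus intercept.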
Problem

Research questions and friction points this paper is trying to address.

Evaluates crowd-sourced moderation effectiveness in polarized global contexts
Assesses cross-partisan note selection for misleading content resolution
Identifies risks posed by the failure to moderate highly polarizing content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses crowd-sourced moderation for content evaluation
Learns latent ideological dimension from user ratings
Selects notes with cross-partisan support globally
Paul Bouchaud
Complex Systems Institute of Paris Ile-de-France CNRS, Paris, France; CAMS EHESS, Paris, France; médialab, Sciences Po, Paris, France
Pedro Ramaciotti
Complex Systems Institute of Paris & médialab, Sciences Po
computational social science, complex networks, recommender systems, social network analysis