🤖 AI Summary
This study examines the content moderation efficacy of X’s Community Notes system across 13 politically polarized countries, asking whether it can mitigate polarization, safeguard civic discourse, and uphold electoral integrity. Method: Leveraging 1.9 million moderation notes and 135 million user ratings, we construct a cross-national ideological scale and apply latent variable modeling alongside causal robustness analysis. Contribution/Results: This is the first empirical test of how well ideological modeling generalizes for a globally deployed crowdsourced moderation system. The system reliably identifies the dominant polarization dimension in each country, but its reliance on “cross-ideological consensus” leads to significantly reduced moderation success rates for the most polarized content, revealing a structural failure risk. These findings expose a fundamental limitation of global crowdsourced moderation in multi-polarized contexts and provide critical empirical evidence for platform governance and democratic resilience research.
📝 Abstract
Social platforms are increasingly transitioning from expert fact-checking to crowd-sourced moderation, with X pioneering this shift through its Community Notes system, which enables users to collaboratively moderate misleading content. To resolve conflicting moderation, Community Notes learns a latent ideological dimension and selects notes garnering cross-partisan support. As this system, designed for and evaluated in the United States, is now deployed worldwide, we evaluate its operation across diverse polarization contexts. We analyze 1.9 million moderation notes with 135 million ratings from 1.2 million users, cross-referenced with ideological scaling data for 13 countries. Our results show that X's Community Notes effectively captures each country's main polarizing dimension but fails by design to moderate the most polarizing content, posing potential risks to civic discourse and electoral processes.
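The "latent ideological dimension" plus "cross-partisan support" mechanism described above can be sketched as a small matrix-factorization model. The sketch below is illustrative only: the ratings data, hyperparameters, and variable names are invented for this example and are not the production Community Notes values. Each rating is modeled as a global mean plus a user intercept, a note intercept, and the product of one-dimensional user and note factors; the factor axis absorbs partisan appeal, so only notes whose factor-independent intercept is high (i.e., notes endorsed across the divide) are surfaced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix (6 users x 5 notes): 1 = "helpful", 0 = "not helpful".
# Users 0-2 and 3-5 sit on opposite sides of a single latent divide;
# notes 0-3 appeal to one side only, note 4 is endorsed by both sides.
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
n_users, n_notes = R.shape

# Model: rating ~ mu + user_intercept + note_intercept + user_factor * note_factor.
# Hyperparameters are illustrative; intercepts are penalized more heavily than
# factors so that partisan appeal is pushed into the factor term.
mu = 0.0
user_int = np.zeros(n_users)
note_int = np.zeros(n_notes)
user_fac = rng.normal(0.0, 0.1, n_users)
note_fac = rng.normal(0.0, 0.1, n_notes)
lr, lam_int, lam_fac = 0.02, 0.15, 0.03

for _ in range(5000):  # full-batch gradient descent on squared error
    err = (mu + user_int[:, None] + note_int[None, :]
           + np.outer(user_fac, note_fac)) - R
    mu -= lr * err.sum()
    user_int -= lr * (err.sum(axis=1) + lam_int * user_int)
    note_int -= lr * (err.sum(axis=0) + lam_int * note_int)
    new_user_fac = user_fac - lr * (err @ note_fac + lam_fac * user_fac)
    note_fac -= lr * (err.T @ user_fac + lam_fac * note_fac)
    user_fac = new_user_fac

# A note is surfaced only when its intercept (factor-independent helpfulness)
# is high, i.e., it draws support across the learned ideological divide.
print("note intercepts:", np.round(note_int, 2))
print("bridging note selected:", int(np.argmax(note_int)) == 4)
```

Running this, only note 4 (the one rated helpful by both user blocs) earns a high intercept, while the four partisan notes are explained away by the factor term, which is the failure mode the paper identifies: content that is itself maximally polarizing never attracts the cross-factor support needed to clear the intercept threshold.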