Impact of Stricter Content Moderation on Parler's Users' Discourse

📅 2023-10-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the impact of Parler's stricter content moderation—specifically targeting hate speech—implemented after its January 2021 shutdown and subsequent relaunch, on user discourse toxicity and information ecosystem quality. Leveraging a novel longitudinal dataset of 17 million posts, the authors employ a quasi-experimental design combining time-series breakpoint analysis, multi-threshold toxicity quantification via the Perspective API, news source credibility classification, and causal inference methods. Results show that the enhanced moderation significantly and persistently reduced severe toxicity (scores above 0.5) but had no statistically significant effect on milder threats and insults. Concurrently, it improved the factual reliability of shared news, markedly decreasing dissemination from conspiracy-theory and pseudoscientific sources. This work provides the first large-scale, high-temporal-resolution, methodologically rigorous empirical evaluation of the effectiveness of a social media platform's moderation policy.
📝 Abstract
Social media platforms employ various content moderation techniques to remove harmful, offensive, and hate speech content. The level of moderation varies across platforms and can evolve over time within a single platform. For example, Parler, a fringe social media platform popular among conservative users, was known for having the least restrictive moderation policies, claiming to offer an open discussion space for its users. However, after the 2021 US Capitol riot was linked to the activity on Parler of groups such as QAnon and the Proud Boys, the platform was removed from the Apple and Google app stores on January 12, 2021, and suspended from Amazon's cloud hosting service. To return to these app stores, Parler had to modify its moderation policies. After a month of downtime, Parler came back online with a new set of user guidelines reflecting stricter content moderation, especially regarding its *hate speech* policy. In this paper, we study the moderation changes performed by Parler and their effect on the toxicity of its content. We collected a large longitudinal Parler dataset of 17M parleys from 432K active users, spanning February 2021 to January 2022, after the platform's return to the Internet and app stores. To the best of our knowledge, this is the first study investigating the effectiveness of content moderation techniques using data-driven approaches, and the first Parler dataset collected after the platform's brief hiatus. Our quasi-experimental time-series analysis indicates that after the change in Parler's moderation, severe forms of toxicity (above a threshold of 0.5) decreased immediately, and the reduction was sustained. In contrast, the trend did not change for less severe threats and insults (thresholds between 0.5 and 0.7). Finally, we found an increase in the factuality of the news sites being shared, along with a decrease in the number of conspiracy or pseudoscience sources being shared.
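The multi-threshold toxicity quantification described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the threshold values, bucket labels, and function names (`bucket`, `share_above`) are assumptions chosen to mirror the 0.5 and 0.7 cutoffs mentioned in the paper, applied to Perspective-API-style scores in [0, 1].

```python
# Illustrative sketch of multi-threshold toxicity bucketing.
# Thresholds and labels are assumptions, not the paper's exact scheme.
THRESHOLDS = [(0.7, "severe"), (0.5, "moderate")]

def bucket(score: float) -> str:
    """Map a Perspective-style toxicity score in [0, 1] to a severity band."""
    for cutoff, label in THRESHOLDS:
        if score >= cutoff:
            return label
    return "low"

def share_above(scores: list[float], cutoff: float) -> float:
    """Fraction of posts whose toxicity score meets or exceeds `cutoff`."""
    return sum(s >= cutoff for s in scores) / len(scores)
```

Tracking `share_above` per day for each attribute (toxicity, threat, insult) yields the daily series that a breakpoint analysis can then compare before and after the moderation change.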
Problem

Research questions and friction points this paper is trying to address.

Impact of stricter moderation on Parler's discourse
Effectiveness of content moderation on toxicity
Changes in shared content factuality post-moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven content moderation analysis
Longitudinal Parler dataset collection
Quasi-experimental time series analysis
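The quasi-experimental time-series idea above can be sketched minimally: with a known breakpoint (Parler's relaunch), estimate the level shift in a daily toxicity series before and after that date. This is a simplified illustration, not the paper's actual analysis, which uses a fuller breakpoint/causal-inference methodology; the function name and the use of a plain mean difference are assumptions.

```python
# Minimal interrupted-time-series sketch: level shift at a known breakpoint.
# A real analysis would also model trend and autocorrelation.
def level_shift(series: list[float], breakpoint: int) -> float:
    """Difference between the post-breakpoint and pre-breakpoint means."""
    pre, post = series[:breakpoint], series[breakpoint:]
    return sum(post) / len(post) - sum(pre) / len(pre)
```

A negative shift in the daily share of severe-toxicity parleys after the relaunch would correspond to the paper's finding of an immediate, sustained decrease.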