🤖 AI Summary
This study tests the core proposition, "Does social approval incentivize online hate speech?", grounded in Walther's (2024) social approval theory of online hate, focusing on two hypotheses: (H1a) whether social approval increases subsequent hate speech, and (H1b) whether it makes that hate speech more extreme.
Method: Drawing on over 110 million posts from the Parler platform (2018-2021), we combine time-series analysis, fixed-effects modeling, and multilevel regression with fine-grained textual annotation and behavioral tracking.
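To make the modeling approach concrete, here is a minimal sketch of a multilevel (random-intercept) regression of the kind named in the Method, fit on synthetic data with `statsmodels`. The variable names (`upvotes`, `hate_score`, `user_id`) are illustrative assumptions, not the paper's actual measures, and the simulated data deliberately contain no true upvote effect, mirroring the null result reported below.

```python
# Sketch only: multilevel regression with a random intercept per user,
# separating within-person from between-person variation.
# All variable names and the data-generating process are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_users, posts_per_user = 50, 20
user_id = np.repeat(np.arange(n_users), posts_per_user)

# Between-person variation: each user has a stable baseline hate level
user_baseline = rng.normal(0, 1, n_users)[user_id]
upvotes = rng.poisson(5, n_users * posts_per_user)

# No true effect of upvotes on subsequent hate (matches the null finding)
hate_score = user_baseline + rng.normal(0, 1, n_users * posts_per_user)

df = pd.DataFrame({"user_id": user_id,
                   "upvotes": upvotes,
                   "hate_score": hate_score})

# Random intercept grouped by user; fixed slope for upvotes
model = smf.mixedlm("hate_score ~ upvotes", data=df, groups=df["user_id"])
result = model.fit()
print(result.params["upvotes"])  # estimated upvote coefficient, near zero here
```

In a design like this, the `upvotes` coefficient captures the within-analysis association the hypotheses target, while the grouping structure absorbs stable between-person differences; a fixed-effects specification would instead include per-user dummy intercepts.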
Contribution/Results: Contrary to expectations, upvotes show no significant positive association with subsequent hate speech; at the individual level, effects are negative or mixed. This provides some of the first empirical evidence that social approval may operate differently on niche platforms than on mainstream ones, challenging the assumed universality of existing theories. We introduce the concept of "cross-platform mechanistic heterogeneity," offering theoretical insight and actionable evidence for platform governance and theory refinement.
📝 Abstract
In this paper, we explored how online hate is motivated by receiving social approval from others. We specifically examined two central tenets of Walther's (2024) social approval theory of online hate: (H1a) more signals of social approval on hate messages predict more subsequent hate messages, and (H1b) as social approval increases, hate speech messages become more extreme. Using over 110 million posts from Parler (2018-2021), we observed that the number of upvotes a person received on a hate speech post was not associated with the amount of hate speech in their next post, nor in their posts during the next week, month, three months, and six months. Between-person effects revealed an average negative relationship between social approval and hate speech production at the post level, but this relationship was mixed at other time intervals. Social approval reinforcement mechanisms of online hate may operate differently on niche social media platforms.