🤖 AI Summary
This study investigates the relationship between ephemeral anonymous "throwaway" accounts on Reddit and content policy violations, as well as platform moderation mechanisms. Using large-scale platform data, we employ statistical modeling and causal inference to systematically examine the associations among account type, posting behavior, and moderation outcomes. Results show that throwaway accounts exhibit significantly higher violation rates and elevated content removal rates; however, their moderation response patterns — including the proportions of human versus automated intervention and processing latency — show no statistically significant differences from those of regular accounts. This provides empirical evidence that anonymity per se does not increase the manual review burden, and that existing automated moderation systems remain robust when handling throwaway-generated content. These findings challenge the prevailing assumption that anonymity inherently exacerbates governance difficulty, offering data-driven support for optimizing platform identity policies and moderation resource allocation.
📝 Abstract
Social media platforms (SMPs) facilitate information sharing across varying levels of sensitivity. A crucial design decision for SMP administrators is the platform's identity policy: some opt for real-name systems, while others allow anonymous participation. Content moderation on these platforms is conducted by both humans and automated bots. This paper examines the relationship between anonymity, specifically through the use of "throwaway" accounts, and the extent and nature of content moderation on Reddit. Our findings indicate that content originating from anonymous throwaway accounts is more likely to violate Reddit's rules and, consequently, more likely to be removed by moderators than content from standard pseudonymous accounts. However, the moderation actions applied to throwaway accounts are consistent with those applied to ordinary accounts, suggesting that the use of anonymous accounts does not necessarily require increased human moderation. We conclude by discussing the implications of these findings for identity policies and content moderation strategies on SMPs.