🤖 AI Summary
Current online reporting systems follow a platform-led inquisitorial model that denies users procedural transparency and participatory rights, undermining procedural fairness and heightening privacy risks. This paper systematically brings adversarial procedural theory from comparative law into platform governance, proposing a user-empowered reporting architecture: users gain substantive rights to lead evidence submission, present arguments, and conduct cross-examination, complemented by a minimal-information-sharing protocol and a verifiable evidence-authentication scheme. Methodologically, the study combines legal-theoretical analysis, formative user interviews, threat modeling, and the co-design of lightweight cryptographic tools within a cross-jurisdictional governance framework. The work delineates the design boundaries of user empowerment, yielding a deployable, privacy-by-design paradigm for online dispute resolution that strengthens perceived procedural justice and users' data self-determination.
📝 Abstract
User reporting systems are central to addressing interpersonal conflicts and protecting users from harm in online spaces, particularly those with heightened privacy expectations. However, users often express frustration at their lack of insight and input into the reporting process. Drawing on offline legal literature, we trace these frustrations to the inquisitorial nature of today's online reporting systems, in which moderators lead evidence gathering and case development. In contrast, adversarial models can grant users greater control and thus better support procedural justice and privacy protection, despite their increased risk of system abuse. This motivates us to explore the potential of incorporating adversarial practices into online reporting systems. Through a literature review, formative interviews, and threat modeling, we find a rich design space for empowering users to collect and present their own evidence while mitigating potential abuse in the reporting process. In particular, we propose designs that minimize the amount of information shared for reporting purposes, as well as designs that support evidence authentication. Finally, we discuss how our findings can inform new cryptographic tools and new efforts to apply comparative legal frameworks to online moderation.