🤖 AI Summary
Large language models (LLMs) play a dual role in the disinformation ecosystem: they can generate highly deceptive content at scale, yet they also serve as tools for detection and governance. Method: We design a controlled multi-agent online forum simulation inspired by the game Werewolf, assigning participants three strategic roles (disinformation propagators, platform moderators, and ordinary users), and analyze their LLM usage through role-driven communication games, qualitative behavioral coding, and LLM-augmented response modeling. Contribution/Results: This study provides the first empirical evidence of role-dependent LLM usage patterns in a controlled multi-agent setting: distinct strategies emerge across roles in prompt engineering, functionality invocation, and outcome efficacy. We propose a “sword-and-shield” co-governance framework that offers empirically grounded insights and methodological guidance for strengthening platform risk controls and improving user media literacy.
📝 Abstract
The emergence of Large Language Models (LLMs) presents a dual challenge in the fight against disinformation. These powerful tools, capable of generating human-like text at scale, can be weaponised to produce sophisticated and persuasive disinformation, yet they also hold promise for enhancing detection and mitigation strategies. This paper investigates the complex dynamics between LLMs and disinformation through a Werewolf-inspired communication game that simulates online forums, played by 25 participants. We analyse how Disinformers, Moderators, and Users leverage LLMs to advance their goals, revealing their potential both for misuse and for combating disinformation. Our findings highlight how LLM use varies with participants' roles and strategies, underscoring the importance of understanding LLM effectiveness in this context. We conclude by discussing implications for future LLM development and online platform design, advocating for a balanced approach that empowers users and fosters trust while mitigating the risks of LLM-assisted disinformation.
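To make the role structure concrete, the sketch below shows one way such a Werewolf-inspired forum round could be wired up as a scripted simulation. This is a minimal illustration, not the authors' setup: the study ran a communication game with human participants using LLMs, and every name here (`Role`, `Agent`, `llm_respond`, `run_round`) plus the coin-flip moderation rule is an assumption made for demonstration.

```python
import random
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    DISINFORMER = "Disinformer"  # crafts persuasive false claims
    MODERATOR = "Moderator"      # screens posts before they reach the forum
    USER = "User"                # reads posts and replies


def llm_respond(role: Role, context: str) -> str:
    """Stand-in for a role-specific LLM call; each role prompts differently."""
    templates = {
        Role.DISINFORMER: f"[fabricated claim] Insiders say {context} is a hoax.",
        Role.MODERATOR: f"[screening note] Checking sources on: {context}",
        Role.USER: f"[reply] Can anyone verify this? Re: {context}",
    }
    return templates[role]


@dataclass
class Agent:
    name: str
    role: Role

    def act(self, context: str) -> str:
        return llm_respond(self.role, context)


def run_round(agents: list[Agent], topic: str) -> list[str]:
    """One forum round: Disinformers post, Moderators screen, Users reply."""
    by_role = {r: [a for a in agents if a.role is r] for r in Role}
    forum: list[str] = []

    for d in by_role[Role.DISINFORMER]:
        draft = d.act(topic)
        moderator = random.choice(by_role[Role.MODERATOR])
        print(moderator.act(draft))  # the screening step itself
        # A coin flip stands in for an LLM-assisted credibility check
        # that lets roughly half of the drafts through.
        if random.random() < 0.5:
            forum.append(draft)

    for u in by_role[Role.USER]:
        for post in list(forum):  # iterate over a snapshot so replies don't recurse
            forum.append(u.act(post))
    return forum


if __name__ == "__main__":
    random.seed(0)
    agents = (
        [Agent(f"D{i}", Role.DISINFORMER) for i in range(2)]
        + [Agent(f"M{i}", Role.MODERATOR) for i in range(2)]
        + [Agent(f"U{i}", Role.USER) for i in range(4)]
    )
    for post in run_round(agents, "the new vaccine rollout"):
        print(post)
```

The round ordering (post, screen, reply) mirrors the night/day phase structure of Werewolf-style games; swapping the canned templates for real LLM calls and the coin flip for a genuine moderation policy would be the natural next step in such a sandbox.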