Sword and Shield: Uses and Strategies of LLMs in Navigating Disinformation

📅 2025-06-08
🤖 AI Summary
Large language models (LLMs) exhibit a dual role in the misinformation ecosystem—both as enablers of highly deceptive content generation and as tools for detection and governance. Method: We design a controlled multi-agent online forum simulation inspired by the “Werewolf” game mechanism, involving three strategic roles—misinformation propagators, platform moderators, and ordinary users—and analyze their LLM usage via role-driven communication games, qualitative behavioral coding, and LLM-augmented response modeling. Contribution/Results: This study provides the first empirical evidence of role-dependent LLM usage patterns in a controlled multi-agent setting: distinct strategies emerge across roles in prompt engineering, functionality invocation, and outcome efficacy. We propose a “sword-and-shield” co-governance framework, offering empirically grounded insights and methodological guidance for optimizing platform risk control and enhancing user media literacy.

📝 Abstract
The emergence of Large Language Models (LLMs) presents a dual challenge in the fight against disinformation. These powerful tools, capable of generating human-like text at scale, can be weaponised to produce sophisticated and persuasive disinformation, yet they also hold promise for enhancing detection and mitigation strategies. This paper investigates the complex dynamics between LLMs and disinformation through a communication game that simulates online forums, inspired by the game Werewolf, with 25 participants. We analyse how Disinformers, Moderators, and Users leverage LLMs to advance their goals, revealing the potential both for misuse and for combating disinformation. Our findings highlight the varying uses of LLMs depending on the participants' roles and strategies, underscoring the importance of understanding their effectiveness in this context. We conclude by discussing implications for future LLM development and online platform design, advocating for a balanced approach that empowers users and fosters trust while mitigating the risks of LLM-assisted disinformation.
Problem

Research questions and friction points this paper is trying to address.

LLMs' dual role in creating and combating disinformation
Analyzing LLM use by different roles in simulated forums
Balancing LLM benefits and risks in online platforms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates online forums using Werewolf-inspired game
Analyzes LLM roles in disinformation and moderation
Advocates balanced LLM use to mitigate risks
Authors

Gionnieve Lim, Bryan Chen Zhengyu Tan, Kellie Yu Hui Sim, Weiyan Shi, Ming Hui Chew, Ming Shan Hee, Roy Ka-Wei Lee, S. Perrault, K. Choo
Singapore University of Technology and Design (SUTD), Singapore