Dynamic Simulation Framework for Disinformation Dissemination and Correction With Social Bots

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
The mechanisms by which social bots influence misinformation dissemination and correction remain poorly understood; existing models oversimplify bot behavior and lack quantitative evaluation. Method: This paper proposes MADD, a Multi-Agent Dynamic Simulation framework integrating realistic user attributes, scale-free network topology (Barabási–Albert model), and community structure (stochastic block model). MADD is the first to explicitly model heterogeneous dynamic behaviors of malicious versus benign bots and supports differential simulation and quantitative assessment of two correction strategies—fact-based and narrative-based. Contribution/Results: Validated via individual- and population-level metrics, MADD demonstrates empirical fidelity in both topological and behavioral dimensions. It successfully reproduces six distinct misinformation propagation scenarios and reveals a critical insight: correction efficacy is jointly governed by network structure and users’ cognitive characteristics.

📝 Abstract
In the human-bot symbiotic information ecosystem, social bots play key roles in spreading and correcting disinformation. Understanding their influence is essential for risk control and better governance. However, current studies often rely on simplistic user and network modeling, overlook the dynamic behavior of bots, and lack quantitative evaluation of correction strategies. To fill these gaps, we propose MADD, a Multi-Agent-based framework for Disinformation Dissemination. MADD constructs a more realistic propagation network by integrating the Barabási–Albert model for scale-free topology with the stochastic block model for community structure, while designing node attributes based on real-world user data. Furthermore, MADD incorporates both malicious and legitimate bots, whose controlled dynamic participation allows for quantitative analysis of correction strategies. We evaluate MADD using individual- and group-level metrics. We experimentally verify the real-world consistency of MADD's user attributes and network structure, and we simulate the dissemination of six disinformation topics, demonstrating the differential effects of fact-based and narrative-based correction strategies.
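The hybrid topology the abstract describes (a scale-free backbone plus community structure) can be pictured with a minimal, dependency-free sketch. All function names, parameter values, and the union-based combination below are illustrative assumptions, not the paper's actual construction:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Scale-free topology: each new node attaches to m existing nodes
    with probability proportional to degree (preferential attachment)."""
    rng = random.Random(seed)
    edges, repeated = set(), []
    targets = list(range(m))            # seed nodes
    for new in range(m, n):
        for t in targets:
            edges.add((min(new, t), max(new, t)))
        repeated.extend(targets)        # degree-weighted node pool
        repeated.extend([new] * m)
        targets = set()
        while len(targets) < m:         # sample next targets by degree
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return edges

def stochastic_block_model(sizes, p_in, p_out, seed=0):
    """Community structure: edges are denser inside blocks than between."""
    rng = random.Random(seed)
    labels = [b for b, s in enumerate(sizes) for _ in range(s)]
    edges = set()
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            p = p_in if labels[i] == labels[j] else p_out
            if rng.random() < p:
                edges.add((i, j))
    return edges, labels

# One simple way to mix hubs with communities: take the union of the
# two edge sets over the same node set.
ba_edges = barabasi_albert(120, 2)
sbm_edges, labels = stochastic_block_model([60, 60], p_in=0.15, p_out=0.01)
hybrid = ba_edges | sbm_edges
```

In practice a library such as networkx provides equivalent generators; the sketch only shows why the combination yields both high-degree hubs and internally dense communities.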
Problem

Research questions and friction points this paper is trying to address.

Study dynamic roles of social bots in disinformation spread and correction
Address simplistic modeling of user behavior and network dynamics
Quantitatively evaluate effectiveness of disinformation correction strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework for disinformation analysis
Combines Barabási–Albert and stochastic block models
Dynamic bot participation for strategy evaluation
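One way to picture the "dynamic bot participation" idea is a toy propagation step in which malicious bots amplify spread and benign bots apply corrections. The state names, roles, and probabilities below are illustrative assumptions, not MADD's actual dynamics:

```python
import random

def spread_step(graph, state, bot_role, beta=0.2, bot_boost=2.0,
                p_correct=0.5, rng=None):
    """One synchronous update: misinformed nodes try to infect
    susceptible neighbors (malicious bots spread more effectively),
    then benign bots convert misinformed neighbors to 'corrected'."""
    rng = rng or random.Random(0)
    new_state = dict(state)
    for u, neigh in graph.items():
        if state[u] != "susceptible":
            continue
        for v in neigh:
            if state[v] == "misinformed":
                p = beta * (bot_boost if bot_role.get(v) == "malicious" else 1.0)
                if rng.random() < min(p, 1.0):
                    new_state[u] = "misinformed"
                    break
    for u, neigh in graph.items():
        if new_state[u] == "misinformed" and any(
                bot_role.get(v) == "benign" for v in neigh):
            if rng.random() < p_correct:
                new_state[u] = "corrected"
    return new_state

# Tiny line network 0-1-2-3-4: node 0 is a malicious bot seeding
# misinformation, node 4 is a benign (corrective) bot.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
state = {0: "misinformed", 1: "susceptible", 2: "susceptible",
         3: "susceptible", 4: "susceptible"}
bots = {0: "malicious", 4: "benign"}
rng = random.Random(7)
for _ in range(5):
    state = spread_step(graph, state, bots, rng=rng)
```

Switching bot participation on or off (or varying `bot_boost` and `p_correct`) is the kind of controlled intervention that makes correction strategies quantitatively comparable.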
Boyu Qiao
PhD, Institute of Information Engineering, Chinese Academy of Sciences
Social bot detection · Natural Language Processing
Kun Li
Institute of Information Engineering, Chinese Academy of Sciences
Wei Zhou
Institute of Information Engineering, Chinese Academy of Sciences
Songlin Hu
Institute of Information Engineering, Chinese Academy of Sciences, School of Cyber Security, University of Chinese Academy of Sciences