🤖 AI Summary
The mechanisms by which social bots influence misinformation dissemination and correction remain poorly understood; existing models oversimplify bot behavior and lack quantitative evaluation. Method: This paper proposes MADD, a Multi-Agent Dynamic Simulation framework integrating realistic user attributes, scale-free network topology (Barabási–Albert model), and community structure (stochastic block model). MADD is the first to explicitly model heterogeneous dynamic behaviors of malicious versus benign bots and supports differential simulation and quantitative assessment of two correction strategies—fact-based and narrative-based. Contribution/Results: Validated via individual- and population-level metrics, MADD demonstrates empirical fidelity in both topological and behavioral dimensions. It successfully reproduces six distinct misinformation propagation scenarios and reveals a critical insight: correction efficacy is jointly governed by network structure and users’ cognitive characteristics.
📝 Abstract
In the human-bot symbiotic information ecosystem, social bots play key roles in spreading and correcting disinformation. Understanding their influence is essential for risk control and better governance. However, current studies often rely on simplistic user and network modeling, overlook the dynamic behavior of bots, and lack quantitative evaluation of correction strategies. To fill these gaps, we propose MADD, a Multi-Agent-based framework for Disinformation Dissemination. MADD constructs a more realistic propagation network by integrating the Barabási–Albert model for scale-free topology and the stochastic block model for community structure, while designing node attributes based on real-world user data. Furthermore, MADD incorporates both malicious and legitimate bots, whose controlled dynamic participation allows for quantitative analysis of correction strategies. We evaluate MADD using individual- and group-level metrics, experimentally verify the real-world consistency of its user attributes and network structure, and simulate the dissemination of six disinformation topics, demonstrating the differential effects of fact-based and narrative-based correction strategies.
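The hybrid topology described above (scale-free structure from the Barabási–Albert model plus community structure from a stochastic block model) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact construction: here each community is grown by preferential attachment, and communities are then joined by sparse random edges mimicking an SBM's off-diagonal block probability (`p_inter` is an assumed parameter).

```python
import random

def barabasi_albert(n, m, rng):
    """Minimal Barabasi-Albert generator: each new node attaches to up to m
    existing nodes chosen with probability proportional to current degree."""
    edges = set()
    repeated = []             # each node appears once per unit of degree
    targets = list(range(m))  # attachment targets for the first new node
    for new in range(m, n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
            repeated += [new, t]
        # preferential attachment: sample from the degree-weighted list
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def build_propagation_network(community_sizes, m=2, p_inter=0.005, seed=42):
    """Hypothetical MADD-style network: scale-free communities linked by
    sparse inter-community edges (SBM-style). Parameters are assumptions."""
    rng = random.Random(seed)
    edges, communities, offset = set(), [], 0
    for size in community_sizes:
        # intra-community scale-free topology
        for u, v in barabasi_albert(size, m, rng):
            edges.add((u + offset, v + offset))
        communities.append(range(offset, offset + size))
        offset += size
    # sparse inter-community links, one Bernoulli draw per node pair
    for i, block_i in enumerate(communities):
        for block_j in communities[i + 1:]:
            for u in block_i:
                for v in block_j:
                    if rng.random() < p_inter:
                        edges.add((u, v))
    return edges, communities

edges, communities = build_propagation_network([100, 80, 60])
```

Within each community, early nodes accumulate high degree (the "influencer" hubs typical of social platforms), while `p_inter` controls how easily content crosses community boundaries.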