🤖 AI Summary
To address the limited influence propagation in online social networks caused by bounded confidence—where users are only influenced by peers with similar opinions and resist direct persuasion—this paper proposes a progressive opinion guidance strategy. We integrate control theory into bounded-confidence opinion dynamics modeling, jointly optimizing multi-agent target selection and stepwise intervention paths. Furthermore, we establish a closed-loop framework that translates mathematical control strategies into deployable persuasive messages automatically generated by large language models (e.g., ChatGPT). Evaluated in simulations on real Twitter networks, our approach significantly improves group-level opinion controllability: it enables precise steering of the average opinion, can suppress or amplify polarization, and outperforms conventional influence strategies that ignore bounded-confidence constraints.
📝 Abstract
Influence campaigns in online social networks are often run by organizations, political parties, and nation states to sway large audiences. These campaigns are carried out by agents in the network who share persuasive content. Yet their impact may be minimal if the audiences remain unswayed, often due to the bounded confidence phenomenon, whereby only a narrow spectrum of viewpoints can influence them. Here we show that to persuade under bounded confidence, an agent must nudge its targets to gradually shift their opinions. Using a control theory approach, we show how to construct an agent's nudging policy under the bounded confidence opinion dynamics model, and also how to select targets for multiple agents in an influence campaign on a social network. Simulations on real Twitter networks show that a multi-agent nudging policy can shift the mean opinion, decrease opinion polarization, or even increase it. We find that our nudging-based policies outperform other common techniques that do not consider the bounded confidence effect. Finally, we show how to craft prompts for large language models, such as ChatGPT, to generate text-based content for real nudging policies. This illustrates the practical feasibility of our approach, allowing one to go from mathematical nudging policies to real social media content.