Simulating Online Social Media Conversations on Controversial Topics Using AI Agents Calibrated on Real-World Data

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language model (LLM)-driven AI agents can simulate, with high fidelity, user discourse and opinion evolution around contentious topics on social media. Methodologically, agents' initial beliefs and social connections are calibrated using real-world election data and microblog network topology, and an opinion dynamics mechanism is integrated into a multi-agent simulation framework. Key contributions include: (1) the first unified modeling of LLM agents' natural language generation, network topology co-evolution, and opinion dynamics; (2) empirical validation that the framework reproduces macro-level opinion polarization trends observed in real data, while revealing systematic limitations in capturing linguistic tone diversity, heterogeneity in toxic expression, and micro-level behavioral fidelity; and (3) identification of improved initial belief calibration, rather than enhanced generative capabilities alone, as the critical pathway to mitigating behavioral distortion in LLM-based social simulation.

📝 Abstract
Online social networks offer a valuable lens to analyze both individual and collective phenomena. Researchers often use simulators to explore controlled scenarios, and the integration of Large Language Models (LLMs) makes these simulations more realistic by enabling agents to understand and generate natural language content. In this work, we investigate the behavior of LLM-based agents in a simulated microblogging social network. We initialize agents with realistic profiles calibrated on real-world online conversations from the 2022 Italian political election and extend an existing simulator by introducing mechanisms for opinion modeling. We examine how LLM agents simulate online conversations, interact with others, and evolve their opinions under different scenarios. Our results show that LLM agents generate coherent content, form connections, and build a realistic social network structure. However, their generated content displays less heterogeneity in tone and toxicity compared to real data. We also find that LLM-based opinion dynamics evolve over time in ways similar to traditional mathematical models. Varying parameter configurations produces no significant changes, indicating that simulations require more careful cognitive modeling at initialization to replicate human behavior more faithfully. Overall, we demonstrate the potential of LLMs for simulating user behavior in social environments, while also identifying key challenges in capturing heterogeneity and complex dynamics.
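The abstract notes that the LLM agents' opinions evolve over time in ways similar to traditional mathematical models. For context, the sketch below implements one such classical baseline, the Deffuant-Weisbuch bounded-confidence model, in which two randomly paired agents move their opinions toward each other only when they already agree within a confidence bound. The function names and parameter values here are illustrative, not the paper's actual implementation.

```python
import random

def deffuant_step(opinions, mu=0.5, epsilon=0.2, rng=random):
    """One pairwise interaction of the Deffuant-Weisbuch model.

    Two randomly chosen agents compare opinions (values in [0, 1]);
    if they differ by less than the confidence bound `epsilon`, each
    moves a fraction `mu` of the way toward the other. The population
    mean opinion is conserved by this symmetric update.
    """
    i, j = rng.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    if abs(diff) < epsilon:
        opinions[i] += mu * diff
        opinions[j] -= mu * diff
    return opinions

# Example: 100 agents with uniform random initial opinions; after many
# interactions, opinions collapse into a few clusters separated by
# more than epsilon (a simple form of polarization).
rng = random.Random(42)
opinions = [rng.random() for _ in range(100)]
for _ in range(10000):
    deffuant_step(opinions, mu=0.5, epsilon=0.2, rng=rng)
```

Comparing LLM-agent opinion trajectories against baselines like this is a common way to check whether the simulated dynamics behave plausibly at the macro level.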
Problem

Research questions and friction points this paper is trying to address.

Simulating online conversations on controversial topics using AI agents
Investigating how LLM-based agents interact and evolve opinions in social networks
Assessing realism of LLM-generated content compared to real human behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents calibrated on real-world data
Extended simulator with opinion modeling mechanisms
Simulated microblogging network for conversation analysis
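The first Innovation bullet, calibrating LLM agents on real-world data, can be pictured as seeding each agent's initial belief from an empirical stance distribution estimated from real conversations. The sketch below is a minimal illustration under that assumption; all names and the stance coding are hypothetical, not the paper's actual API.

```python
import random

def init_agents(observed_stances, n_agents, rng=random):
    """Initialize agent profiles by sampling initial beliefs from an
    empirical stance distribution (e.g., stances inferred from real
    election-related posts). Illustrative only: the real framework also
    calibrates each agent's social connections from network topology.
    """
    return [
        {"id": k, "belief": rng.choice(observed_stances)}
        for k in range(n_agents)
    ]

# Example: stances coded on a [-1, 1] axis, estimated from real posts
rng = random.Random(0)
observed = [-0.8, -0.5, -0.5, 0.0, 0.3, 0.7, 0.7, 0.9]
agents = init_agents(observed, n_agents=50, rng=rng)
```

Sampling from the observed distribution (rather than assigning uniform or random beliefs) is what makes the simulated population's starting point match the real one, which the paper identifies as critical for faithful behavior.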
Elisa Composta
DEIB, Politecnico di Milano
Nicolò Fontana
DEIB, Politecnico di Milano
Francesco Corso
Ph.D. Student, Computer Science, Politecnico di Milano
Francesco Pierri
DEIB, Politecnico di Milano