Learning to Make Friends: Coaching LLM Agents toward Emergent Social Ties

📅 2025-10-22
🤖 AI Summary
This study investigates whether large language model (LLM) agents can replicate core human online social dynamics, namely homophily, reciprocity, and social identity, and identifies the memory and learning mechanisms that enable such behavior. To this end, we propose the first multi-agent LLM simulation framework to integrate *coach signals* (structured external guidance) with empirically grounded behavioral reward functions, combining in-context learning and structured social feedback to model emergent phenomena from individual decision-making up to collective network topology. Experimental results demonstrate that agents spontaneously form stable interaction ties, exhibit empathetic support behaviors, and generate social network structures that closely approximate those observed in real-world online communities. Crucially, this work introduces coach-based adaptation for LLMs in social contexts and provides systematic empirical validation of the pivotal role behavioral rewards play in the emergence of authentic social dynamics.

📝 Abstract
Can large language model (LLM) agents reproduce the complex social dynamics that characterize human online behavior -- shaped by homophily, reciprocity, and social validation -- and what memory and learning mechanisms enable such dynamics to emerge? We present a multi-agent LLM simulation framework in which agents repeatedly interact, evaluate one another, and adapt their behavior through in-context learning accelerated by a coaching signal. To model human social behavior, we design behavioral reward functions that capture core drivers of online engagement, including social interaction, information seeking, self-presentation, coordination, and emotional support. These rewards align agent objectives with empirically observed user motivations, enabling the study of how network structures and group formations emerge from individual decision-making. Our experiments show that coached LLM agents develop stable interaction patterns and form emergent social ties, yielding network structures that mirror properties of real online communities. By combining behavioral rewards with in-context adaptation, our framework establishes a principled testbed for investigating collective dynamics in LLM populations and reveals how artificial agents may approximate or diverge from human-like social behavior.
Problem

Research questions and friction points this paper is trying to address.

Reproducing human social dynamics through LLM agent interactions
Developing emergent social ties via behavioral reward functions
Investigating collective dynamics in artificial agent populations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coaching signal accelerates in-context learning
Behavioral reward functions model human motivations
Framework enables emergent social ties and networks
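The bullets above describe a coaching signal that accelerates in-context learning: after each interaction, structured feedback derived from the reward is fed back into the agent's context. A minimal sketch of that loop, assuming the coach message is plain text appended to the agent's history (all function names, the threshold, and the message wording are hypothetical):

```python
# Hypothetical sketch of coach-accelerated in-context adaptation:
# a scalar reward is turned into structured feedback, which is then
# appended to the agent's context for the next round. The threshold
# and feedback text are illustrative assumptions.

def coach_feedback(reward: float, threshold: float = 0.5) -> str:
    """Translate a scalar behavioral reward into a coaching message."""
    if reward >= threshold:
        return f"Coach: reward {reward:.2f}; keep reinforcing this tie."
    return f"Coach: reward {reward:.2f}; try more reciprocal replies."

def adapt_context(context: list[str], message: str,
                  reward: float) -> list[str]:
    """One in-context learning step: the agent's history grows with
    its own message plus the coach's feedback on it."""
    return context + [message, coach_feedback(reward)]
```

The design choice sketched here is that coaching is external guidance layered on top of in-context learning, rather than a weight update: the agent adapts purely by conditioning on its growing feedback history.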
Philipp J. Schneider
PhD Candidate, EPFL
Machine Learning · Data Science · Network Science · Business Analytics

Lin Tian
University of Technology Sydney, Sydney, Australia

Marian-Andrei Rizoiu
University of Technology Sydney, Sydney, Australia