🤖 AI Summary
Existing social agents are predominantly designed for specific scenarios and lack a unified theoretical foundation, resulting in poor cross-context generalization and insufficient behavioral consistency and realism. This paper introduces the first generative social agent framework grounded in social cognitive theory, comprising three synergistic modules: motivation modeling, hierarchical behavior planning, and online learning—enabling high-fidelity, interpretable simulation of human social behavior. Built upon large language models, the framework deeply integrates core principles of social cognition to support dynamic adaptation and multi-agent interaction. Experimental evaluation demonstrates up to 75% lower deviation from ground-truth behavioral data across multiple fidelity metrics compared to baseline approaches. Ablation studies confirm that each module makes a distinct, non-redundant contribution to behavioral accuracy: removing any one of them increases errors by 1.5 to 3.2 times.
📝 Abstract
Recent advances in large language models have demonstrated strong reasoning and role-playing capabilities, opening new opportunities for agent-based social simulations. However, most existing agent implementations are tailored to specific scenarios, with no unified framework to guide their design. The absence of a general social agent limits the ability to generalize across social contexts and to produce consistent, realistic behaviors. To address this challenge, we propose a theory-informed framework that provides a systematic design process for LLM-based social agents. Our framework is grounded in principles from Social Cognition Theory and introduces three key modules: motivation, action planning, and learning. These modules jointly enable agents to reason about their goals, plan coherent actions, and adapt their behavior over time, leading to more flexible and contextually appropriate responses. Comprehensive experiments demonstrate that our theory-driven agents reproduce realistic human behavior patterns under complex conditions, achieving up to 75% lower deviation from real-world behavioral data across multiple fidelity metrics compared to classical generative baselines. Ablation studies further show that removing the motivation, planning, or learning module increases errors by 1.5 to 3.2 times, confirming their distinct and essential contributions to generating realistic and coherent social behaviors.
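The motivation → planning → learning loop described above can be sketched as a minimal agent skeleton. This is an illustrative assumption of how the three modules might compose, not the paper's actual implementation; in the real framework the planning step would be an LLM call, and all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SocialAgent:
    # Motivation module state: goal -> current drive strength (hypothetical).
    goals: dict[str, float]
    # Learning module state: record of past (goal, feedback) pairs.
    memory: list = field(default_factory=list)

    def motivate(self) -> str:
        # Motivation: select the goal with the strongest current drive.
        return max(self.goals, key=self.goals.get)

    def plan(self, goal: str) -> list[str]:
        # Action planning: decompose the goal into coarse action steps.
        # (An LLM would generate these in the actual framework; stubbed here.)
        return [f"assess:{goal}", f"act:{goal}"]

    def learn(self, goal: str, feedback: float) -> None:
        # Learning: shift drive toward goals that yielded positive feedback,
        # and store the experience for later adaptation.
        self.goals[goal] += 0.1 * feedback
        self.memory.append((goal, feedback))

    def step(self, feedback: float = 0.0) -> list[str]:
        # One simulation tick: motivate -> plan -> learn.
        goal = self.motivate()
        actions = self.plan(goal)
        self.learn(goal, feedback)
        return actions
```

A usage pass: an agent initialized with `goals={"socialize": 0.6, "rest": 0.4}` selects `socialize`, emits its two planned actions, and after positive feedback raises that goal's drive, so subsequent steps remain behaviorally consistent — the kind of closed adaptation loop the ablation results attribute to the learning module.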