🤖 AI Summary
This paper investigates the potential and challenges of integrating large language models (LLMs) across the full lifecycle of agent-based modeling (ABM), from problem formulation and model design to implementation, analysis, interpretation, and dissemination. Adopting a process-oriented approach, it systematically maps LLM capabilities, including text generation, information extraction, logical reasoning, and natural language interaction, to each ABM stage, offering the first comprehensive mapping of such synergies. The study proposes a critical integration framework that delineates current practical boundaries, technical limitations (e.g., interpretability, reliability, domain adaptability), and key risks (e.g., hallucination, bias, lack of formal validation). By synthesizing empirical insights and conceptual analysis, the work establishes foundational principles and methodological guidelines for the safe, transparent, and trustworthy incorporation of LLMs in computational social science and complex systems modeling.
📝 Abstract
The emergence of Large Language Models (LLMs) with increasingly sophisticated natural language understanding and generative capabilities has sparked interest in the Agent-based Modelling (ABM) community. With their ability to summarize, generate, analyze, categorize, transcribe and translate text, answer questions, propose explanations, sustain dialogue, extract information from unstructured text, and perform logical reasoning and problem-solving tasks, LLMs have strong potential to contribute to the modelling process. After reviewing the current use of LLMs in ABM, this study reflects on the opportunities and challenges of their potential use in ABM. It does so by following the modelling cycle, from problem formulation to documentation and communication of model results, while maintaining a critical stance.