🤖 AI Summary
This study investigates belief dynamics in LLM-based multi-agent systems: specifically, how an explicit "belief box" (encoding propositional beliefs and associated confidence scores) modulates belief stability, persuasive efficacy, and responsiveness to counterarguments, and how the directive "remain open-minded" affects belief plasticity. Method: The authors propose a prompt-engineering technique that constructs belief boxes and integrates openness-oriented behavioral instructions, simulating social influence processes within multi-agent debate scenarios. Contributions/Results: (1) Belief boxes significantly improve agents' robustness against interference and enhance their persuasive performance; (2) openness directives effectively increase belief plasticity, especially under group pressure, facilitating acceptance of opposing views; (3) calibration of belief strength serves as a critical lever for balancing belief retention against belief revision. This work provides the first systematic empirical validation that explicit belief representation enables interpretable, controllable regulation of LLMs' social reasoning capabilities.
📝 Abstract
As multi-agent systems are increasingly used for reasoning and decision-making applications, there is a growing need for LLM-based agents to have something resembling propositional beliefs. One simple way to provide them is to include statements describing the beliefs an agent maintains in its prompt space (in what we'll call its belief box). But when agents have such statements in their belief boxes, how does this actually affect their behaviors and dispositions toward those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if agents are instructed to be open-minded, how does that affect their behavior? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. It also affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., in peer-pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique for reasoning and decision-making tasks.
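To make the technique concrete, here is a minimal sketch of how belief statements with strengths, plus an optional open-mindedness directive, might be rendered into an agent's prompt space. The helper name, prompt wording, and 0–1 strength scale are illustrative assumptions, not the paper's actual template:

```python
def build_belief_box_prompt(beliefs, open_minded=False):
    """Render a 'belief box' as a system-prompt fragment.

    beliefs: list of (proposition, strength) pairs, where strength
    is an assumed confidence score in [0, 1] (illustrative scale).
    open_minded: if True, append an openness directive of the kind
    the abstract describes (wording here is hypothetical).
    """
    lines = ["You hold the following beliefs:"]
    for proposition, strength in beliefs:
        lines.append(f"- {proposition} (belief strength: {strength:.2f})")
    if open_minded:
        lines.append(
            "Remain open-minded: be willing to revise these beliefs "
            "when presented with sufficiently strong counterarguments."
        )
    return "\n".join(lines)


prompt = build_belief_box_prompt(
    [
        ("Remote work increases productivity.", 0.8),
        ("Four-day work weeks reduce burnout.", 0.6),
    ],
    open_minded=True,
)
print(prompt)
```

In a multi-agent debate setting, a fragment like this would typically be prepended to each agent's system prompt before the debate transcript, so that belief strength can be varied per agent to study retention versus revision.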