NomicLaw: Emergent Trust and Strategic Argumentation in LLMs During Collaborative Law-Making

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the socio-cognitive behaviors of large language models (LLMs) in open-ended, multi-agent legal deliberation. To address the lack of empirically grounded frameworks for institutional AI collaboration, the authors introduce the first collaborative legislative simulation framework, in which multiple LLM agents propose rules, engage in ethical discourse, and vote dynamically on peer proposals in response to realistic legal dilemmas. Methodologically, the work combines formal modeling of voting dynamics, qualitative discourse analysis, and controlled cross-model interaction experiments spanning both homogeneous and heterogeneous LLM configurations. The results demonstrate emergent social behavior in LLMs: spontaneous trust formation, strategic coalition-building, and rhetorical adaptation. Experiments across ten open-source models reveal quantifiable regularities in argumentation strategies, reciprocity preferences, and collective decision-making tendencies. This work establishes an empirical basis and a methodological paradigm for integrating AI into formal, institutionally structured deliberative processes.
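The deliberation protocol the summary describes (agents propose rules, justify them, then vote on peer proposals, round after round) can be sketched as a minimal simulation loop. This is an illustrative reconstruction, not the paper's code: the `Agent` interface, the random-vote stub standing in for an LLM call, and the plurality tally are all assumptions.

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    """One participant in the law-making game.

    In NomicLaw each agent would be an LLM; here propose/vote are stubbed
    so the round structure itself is runnable.
    """
    name: str

    def propose(self, vignette: str) -> str:
        # Placeholder for an LLM call that drafts a rule plus justification.
        return f"{self.name}'s proposed rule for: {vignette}"

    def vote(self, proposals: dict[str, str]) -> str:
        # Placeholder deliberation: vote for a random peer (never oneself).
        peers = [name for name in proposals if name != self.name]
        return random.choice(peers)


def run_round(agents: list[Agent], vignette: str) -> tuple[dict[str, str], str]:
    """One propose-justify-vote round; returns the vote map and the winner."""
    proposals = {a.name: a.propose(vignette) for a in agents}
    votes = {a.name: a.vote(proposals) for a in agents}
    tally: dict[str, int] = {}
    for target in votes.values():
        tally[target] = tally.get(target, 0) + 1
    winner = max(tally, key=lambda name: tally[name])  # plurality rule (assumed)
    return votes, winner
```

Logging the `votes` dict from each round yields exactly the kind of voting-pattern data the paper's trust and reciprocity analysis operates on.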

📝 Abstract
Recent advancements in large language models (LLMs) have extended their capabilities from basic text processing to complex reasoning tasks, including legal interpretation, argumentation, and strategic interaction. However, empirical understanding of LLM behavior in open-ended, multi-agent settings, especially those involving deliberation over legal and ethical dilemmas, remains limited. We introduce NomicLaw, a structured multi-agent simulation where LLMs engage in collaborative law-making, responding to complex legal vignettes by proposing rules, justifying them, and voting on peer proposals. We quantitatively measure trust and reciprocity via voting patterns and qualitatively assess how agents use strategic language to justify proposals and influence outcomes. Experiments involving homogeneous and heterogeneous LLM groups demonstrate how agents spontaneously form alliances, betray trust, and adapt their rhetoric to shape collective decisions. Our results highlight the latent social reasoning and persuasive capabilities of ten open-source LLMs and provide insights into the design of future AI systems capable of autonomous negotiation, coordination, and legislative drafting in legal settings.
Problem

Research questions and friction points this paper is trying to address.

Understanding LLM behavior in multi-agent legal deliberation
Measuring trust and reciprocity in collaborative law-making simulations
Assessing strategic language use in LLM legal argumentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent simulation for collaborative law-making
Quantitative trust measurement via voting patterns
Qualitative strategic language analysis for influence
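The "quantitative trust measurement via voting patterns" idea can be illustrated with a simple pairwise metric: the fraction of votes in one round that the recipient returns in the next round. This is one plausible operationalization of reciprocity, not necessarily the exact measure used in the paper.

```python
def reciprocity(vote_history: list[dict[str, str]]) -> float:
    """Fraction of votes that are reciprocated in the following round.

    vote_history: one dict per round, mapping voter name -> votee name.
    A vote A->B in round t counts as reciprocated if B votes A in round t+1.
    """
    reciprocated = 0
    total = 0
    for prev_round, next_round in zip(vote_history, vote_history[1:]):
        for voter, votee in prev_round.items():
            total += 1
            if next_round.get(votee) == voter:
                reciprocated += 1
    return reciprocated / total if total else 0.0
```

A score near 1.0 would indicate stable mutual-support coalitions; a score near the chance level for the group size would indicate votes driven by proposal content rather than by who voted for whom.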