🤖 AI Summary
This study investigates the alignment between large language models (LLMs) and human social reasoning in collective decision-making, specifically examining whether LLMs reproduce or mitigate human cognitive biases.
Method: Using the “Lost at Sea” social psychology task, we conduct a large-scale human experiment and parallel simulations of matched LLM groups conditioned on the human data, benchmarking Gemini 2.5, GPT-4.1, Claude Haiku 3.5, and Gemma 3. A novel group-level human–AI alignment evaluation framework is applied to analyze model reasoning under explicit social cues.
Contribution/Results: We identify significant behavioral heterogeneity across models: some replicate human biases, while others actively compensate for them. Alignment is jointly moderated by situational cues and model-specific properties. Critically, this work introduces the first dynamic, collective-scale social alignment assessment paradigm, empirically demonstrating the limitations of static benchmarks. The findings provide both theoretical grounding and empirical evidence for building trustworthy human–AI collaborative decision systems.
📝 Abstract
As large language models (LLMs) are increasingly used to model and augment collective decision-making, it is critical to examine their alignment with human social reasoning. We present an empirical framework for assessing collective alignment, in contrast to prior work at the individual level. Using the Lost at Sea social psychology task, we conduct a large-scale online experiment (N=748), randomly assigning groups to leader elections with either visible demographic attributes (e.g., name and gender) or pseudonymous aliases. We then simulate matched LLM groups conditioned on the human data, benchmarking Gemini 2.5, GPT-4.1, Claude Haiku 3.5, and Gemma 3. LLM behaviors diverge: some mirror human biases; others mask these biases and attempt to compensate for them. We empirically demonstrate that human–AI alignment in collective reasoning depends on context, cues, and model-specific inductive biases. Understanding how LLMs align with collective human behavior is critical to advancing socially aligned AI, and demands dynamic benchmarks that capture the complexities of collective reasoning.