Can Finetuning LLMs on Small Human Samples Increase Heterogeneity, Alignment, and Belief-Action Coherence?

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
When substituting for human participants in social survey simulations, large language models (LLMs) exhibit critical limitations: insufficient behavioral diversity, subgroup misalignment, lack of intra-group variation, and belief–action inconsistency. Method: This study conducts the first systematic evaluation of supervised fine-tuning on small-scale human behavioral data (from an information disclosure experiment) as a means of improving LLM fidelity in simulating human responses. Contribution/Results: Fine-tuning substantially improves distributional heterogeneity, cross-subgroup alignment, and belief–behavior consistency in model outputs. However, it fails to replicate key regression coefficients from the original empirical study, indicating that current fine-tuned LLMs remain unsuitable for causal inference. This work establishes a methodological foundation and delineates epistemic boundaries for the credible use of LLMs in social science simulation.
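
As a rough illustration of the setup the summary describes, here is a minimal sketch of supervised fine-tuning a causal LM on a pilot-scale human sample. The base model ("gpt2"), the file name, and all hyperparameters are illustrative placeholders, not the paper's choices; it assumes the pilot responses are stored as prompt/response pairs in a JSONL file.

```python
# Minimal SFT sketch: fine-tune a small causal LM on pilot-scale human data.
# "gpt2", "pilot.jsonl", and all hyperparameters are illustrative placeholders.
import json
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

class PilotData(Dataset):
    """A small JSONL file of {"prompt": ..., "response": ...} pairs."""
    def __init__(self, path):
        with open(path) as f:
            self.rows = [json.loads(line) for line in f]
    def __len__(self):
        return len(self.rows)
    def __getitem__(self, i):
        row = self.rows[i]
        enc = tokenizer(row["prompt"] + row["response"] + tokenizer.eos_token,
                        truncation=True, max_length=512, padding="max_length")
        # Standard causal-LM objective; padding positions are masked out.
        labels = [t if m == 1 else -100
                  for t, m in zip(enc["input_ids"], enc["attention_mask"])]
        return {"input_ids": enc["input_ids"],
                "attention_mask": enc["attention_mask"],
                "labels": labels}

args = TrainingArguments(output_dir="sft-pilot", num_train_epochs=3,
                         per_device_train_batch_size=4, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=PilotData("pilot.jsonl")).train()
```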

📝 Abstract
There is ongoing debate about whether large language models (LLMs) can serve as substitutes for human participants in survey and experimental research. While recent work in fields such as marketing and psychology has explored the potential of LLM-based simulation, a growing body of evidence cautions against this practice: LLMs often fail to align with real human behavior, exhibiting limited diversity, systematic misalignment for minority subgroups, insufficient within-group variance, and discrepancies between stated beliefs and actions. This study examines an important and distinct question in this domain: whether fine-tuning on a small subset of human survey data, such as that obtainable from a pilot study, can mitigate these issues and yield realistic simulated outcomes. Using a behavioral experiment on information disclosure, we compare human and LLM-generated responses across multiple dimensions, including distributional divergence, subgroup alignment, belief-action coherence, and the recovery of regression coefficients. We find that fine-tuning on small human samples substantially improves heterogeneity, alignment, and belief-action coherence relative to the base model. However, even the best-performing fine-tuned models fail to reproduce the regression coefficients of the original study, suggesting that LLM-generated data remain unsuitable for replacing human participants in formal inferential analyses.
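
The abstract's four comparison dimensions can be read concretely. Below is a minimal sketch, assuming human and simulated responses sit in pandas DataFrames; the column names "disclosure", "belief", and "subgroup" are hypothetical stand-ins for the study's actual variables.

```python
# Sketch of the four comparison dimensions named in the abstract; column
# names ("disclosure", "belief", "subgroup") are hypothetical placeholders.
import pandas as pd
from scipy.stats import wasserstein_distance

def compare(human: pd.DataFrame, sim: pd.DataFrame) -> dict:
    out = {}
    # 1. Distributional divergence of the outcome variable.
    out["divergence"] = wasserstein_distance(human["disclosure"],
                                             sim["disclosure"])
    # 2. Subgroup alignment: average gap between subgroup means.
    h_means = human.groupby("subgroup")["disclosure"].mean()
    s_means = sim.groupby("subgroup")["disclosure"].mean()
    out["subgroup_gap"] = (h_means - s_means).abs().mean()
    # 3. Belief-action coherence: stated belief vs. actual disclosure.
    out["coherence_human"] = human["belief"].corr(human["disclosure"])
    out["coherence_sim"] = sim["belief"].corr(sim["disclosure"])
    # 4. Heterogeneity: within-sample variance relative to humans.
    out["variance_ratio"] = sim["disclosure"].var() / human["disclosure"].var()
    return out
```

On this reading, fine-tuning helps when the divergence and subgroup gap shrink and the simulated coherence and variance ratio move toward the human values.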
Problem

Research questions and friction points this paper is trying to address.

Investigates whether fine-tuning LLMs on small human samples improves behavioral heterogeneity and subgroup alignment
Examines whether fine-tuning enhances belief-action coherence in simulated human responses
Assesses whether fine-tuned LLMs can recover the regression coefficients of the original human study (see the sketch after this list)
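
The third question amounts to a coefficient-recovery check: fit the same regression specification on the human sample and on the LLM-generated sample, then compare estimates. A minimal sketch with synthetic stand-in data; the specification and variable names are hypothetical, not the paper's.

```python
# Coefficient-recovery sketch: same OLS specification on human vs. simulated
# data. The toy() generator and the formula are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def toy(n=200):
    # Stand-in data so the sketch runs end to end; real inputs would be the
    # human sample and the fine-tuned model's simulated responses.
    df = pd.DataFrame({"incentive": rng.integers(0, 2, n),
                       "privacy_concern": rng.normal(size=n)})
    df["disclosure"] = (0.5 * df["incentive"] - 0.3 * df["privacy_concern"]
                        + rng.normal(size=n))
    return df

SPEC = "disclosure ~ incentive + privacy_concern"  # hypothetical specification

def coefs(df):
    return smf.ols(SPEC, data=df).fit().params

human_beta, sim_beta = coefs(toy()), coefs(toy())
print(pd.DataFrame({"human": human_beta, "simulated": sim_beta,
                    "abs_diff": (human_beta - sim_beta).abs()}))
```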
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic evaluation of fine-tuning LLMs on small, pilot-scale human samples
Shows fine-tuning improves heterogeneity, subgroup alignment, and belief-action coherence
Delineates a limit: even the best fine-tuned models fail to reproduce the original regression coefficients