Identifying Non-Replicable Social Science Studies with Language Models

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can predict the reproducibility of empirical behavioural science research. The authors propose a generative evaluation framework spanning open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary (GPT-4o) models: instruction-tuned LLMs generate synthetic participant responses, from which effect sizes are estimated and original findings are classified as replicable or not. A key finding is that sampling temperature modulates bias in effect-size estimation: low generation variance leads to inflated effect estimates. Empirically, Mistral 7B achieves a 77% F1 score, while GPT-4o and Llama 3 8B each reach 67%, suggesting practical utility for reproducibility screening.

📝 Abstract
In this study, we investigate whether LLMs can be used to indicate if a study in the behavioural social sciences is replicable. Using a dataset of 14 previously replicated studies (9 successful, 5 unsuccessful), we evaluate the ability of both open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary (GPT-4o) instruction-tuned LLMs to discriminate between replicable and non-replicable findings. We use LLMs to generate synthetic samples of responses from behavioural studies and estimate whether the measured effects support the original findings. When compared with human replication results for these studies, we achieve F1 values of up to 77% with Mistral 7B, 67% with GPT-4o and Llama 3 8B, and 55% with Qwen 2 7B, suggesting their potential for this task. We also analyse how effect size calculations are affected by sampling temperature and find that low variance (due to temperature) leads to biased effect estimates.
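The abstract's temperature finding can be illustrated with a minimal sketch: a standardised effect size such as Cohen's d divides a mean difference by the pooled standard deviation, so when low-temperature sampling shrinks the variance of synthetic responses, the same mean gap yields a much larger estimated effect. The samples below are hypothetical toy data, not the paper's, and Cohen's d is one common choice of effect-size measure, not necessarily the exact statistic the authors use.

```python
import statistics

def cohens_d(sample_a, sample_b):
    """Cohen's d: mean difference divided by the pooled sample SD."""
    na, nb = len(sample_a), len(sample_b)
    mean_diff = statistics.fmean(sample_a) - statistics.fmean(sample_b)
    var_a = statistics.variance(sample_a)  # sample variance (n-1 denominator)
    var_b = statistics.variance(sample_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return mean_diff / pooled_sd

# Two hypothetical pairs of synthetic-response samples, both with a
# mean gap of exactly 1. The low-variance pair (mimicking responses
# generated at low temperature) produces a far larger d, showing how
# reduced variance alone inflates the effect estimate.
high_var = cohens_d([3, 5, 7, 2, 6, 4], [2, 4, 6, 1, 5, 3])
low_var = cohens_d([4.9, 5.0, 5.1, 5.0, 4.9, 5.1],
                   [3.9, 4.0, 4.1, 4.0, 3.9, 4.1])
print(high_var, low_var)
```

Because the numerator is identical in both cases, the entire difference in d comes from the denominator, which is the mechanism behind the biased effect estimates reported in the abstract.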
Problem

Research questions and friction points this paper is trying to address.

Assessing replicability of social science studies using LLMs.
Evaluating LLMs' ability to distinguish replicable findings.
Analyzing effect size bias due to sampling temperature.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs assess social science study replicability.
Synthetic response samples generated for analysis.
Effect size bias linked to sampling temperature.
Denitsa Saynova
Chalmers University of Technology, University of Gothenburg
Kajsa Hansson
Lund University
B. Bruinsma
Chalmers University of Technology, University of Gothenburg
Annika Fredén
Lund University
Moa Johansson
Associate Professor (Docent), Chalmers University
Neuro-symbolic AI · AI for maths · Automated Reasoning · AI in Sports · NLP