🤖 AI Summary
This study investigates whether large language models (LLMs) can predict the reproducibility of empirical behavioural science research. The authors propose a generative evaluation framework spanning open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary (GPT-4o) models: instruction-tuned LLMs generate synthetic participant responses, from which effect sizes are estimated and original findings are classified as replicable or not. A key finding is that sampling temperature strongly modulates bias in effect-size estimation: low-temperature (near-deterministic) generation suppresses response variance and inflates estimated effects. Empirically, Mistral 7B reaches an F1 score of 77%, while GPT-4o and Llama 3 8B each attain 67%, suggesting practical utility for reproducibility screening. The work is an early systematic application of multiple LLMs to reproducibility prediction and a step toward automated scientific quality assessment.
📝 Abstract
In this study, we investigate whether LLMs can be used to indicate whether a study in the behavioural social sciences is replicable. Using a dataset of 14 previously replicated studies (9 successful, 5 unsuccessful), we evaluate the ability of both open-source (Llama 3 8B, Qwen 2 7B, Mistral 7B) and proprietary (GPT-4o) instruction-tuned LLMs to discriminate between replicable and non-replicable findings. We use LLMs to generate synthetic samples of participant responses for each behavioural study and estimate whether the measured effects support the original findings. Compared against human replication results for these studies, we achieve F1 scores of up to 77% with Mistral 7B, 67% with GPT-4o and Llama 3 8B, and 55% with Qwen 2 7B, suggesting the potential of LLMs for this task. We also analyse how effect-size calculations are affected by sampling temperature and find that low variance (induced by low temperature) leads to biased effect estimates.
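The pipeline the abstract describes — generate synthetic samples, compute an effect size, classify replicability, and score against human replication outcomes — can be sketched minimally. The paper does not specify its effect-size measure or decision rule; the sketch below assumes pooled-SD Cohen's d with a simple threshold, and all function names and the threshold value are illustrative, not the authors' implementation.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Pooled-standard-deviation Cohen's d between two samples
    (e.g. LLM-generated treatment vs. control responses)."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

def predict_replicable(treatment, control, threshold=0.2):
    # Assumed decision rule: call the finding replicable if the
    # synthetic effect size exceeds a small-effect threshold.
    return cohens_d(treatment, control) > threshold

def f1_score(preds, labels):
    """Binary F1 of predicted vs. human replication outcomes."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn)

# Toy illustration with fabricated Likert-style synthetic responses:
treatment = [5, 4, 5, 4, 5, 3, 4, 5]
control = [3, 2, 3, 4, 2, 3, 3, 2]
print(predict_replicable(treatment, control))  # clear separation -> True
```

Note that if the generator runs at very low temperature, the synthetic responses collapse toward a single value, shrinking `pooled_sd` and inflating d — the bias the abstract attributes to low sampling variance.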