🤖 AI Summary
Problem: Item validation for psychological measurement of large language models (LLMs) currently lacks efficient methods for assessing construct validity.
Method: This paper proposes a virtual validation framework grounded in mediation modeling. An LLM generates trait–response mediators (e.g., cognitive biases and social-desirability tendencies) that capture individual differences; these mediators drive simulated respondents' diverse response behaviors, which in turn reveal how robustly each item measures its target construct (Big Five, Schwartz Values, VIA Strengths).
Contribution/Results: This work is the first systematic investigation of LLMs' potential for psychometric construct-validity assessment without requiring large-scale human-annotated data. Across all three theoretical frameworks, experiments demonstrate that LLMs reliably generate theoretically grounded mediators and accurately reproduce expected response patterns. The framework supports item selection and validity evaluation while substantially reducing validation cost and maintaining methodological rigor.
📝 Abstract
As psychometric surveys are increasingly used to assess the traits of large language models (LLMs), the need for scalable survey item generation suited for LLMs has also grown. A critical challenge here is ensuring the construct validity of generated items, i.e., whether they truly measure the intended trait. Traditionally, this requires costly, large-scale human data collection. To reduce this cost, we present a framework for virtual respondent simulation using LLMs. Our central idea is to account for mediators: factors through which the same trait can give rise to varying responses to a survey item. By simulating respondents with diverse mediators, we identify survey items that robustly measure intended traits. Experiments on three psychological trait theories (Big5, Schwartz, VIA) show that our mediator generation methods and simulation framework effectively identify high-validity items. LLMs demonstrate the ability to generate plausible mediators from trait definitions and to simulate respondent behavior for item validation. Our problem formulation, metrics, methodology, and dataset open a new direction for cost-effective survey development and a deeper understanding of how LLMs replicate human-like behavior. We will publicly release our dataset and code to support future work.
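As a rough illustration of the mediator-based simulation described above, the sketch below replaces the LLM respondent with a simple stochastic stand-in: each mediator perturbs how a fixed latent trait level maps to a Likert answer, and an item's validity proxy is its worst-case agreement across mediators. The mediator names, distortion values, and robustness metric are illustrative assumptions, not the paper's implementation.

```python
import random
from statistics import mean

# Illustrative mediators; in the paper these would be generated by an LLM
# from the trait definition.
MEDIATORS = ["acquiescence bias", "social desirability", "mood state"]

def simulate_response(trait_level, mediator, rng):
    """Toy stand-in for an LLM-simulated respondent: maps a latent trait
    level in [0, 1] plus a mediator-specific distortion to a 1-5 Likert
    answer. Distortion magnitudes are arbitrary for illustration."""
    distortion = {"acquiescence bias": 0.15,
                  "social desirability": 0.10,
                  "mood state": rng.uniform(-0.2, 0.2)}[mediator]
    score = min(max(trait_level + distortion + rng.gauss(0, 0.05), 0.0), 1.0)
    return round(1 + 4 * score)

def item_robustness(trait_levels, n_respondents=200, seed=0):
    """Validity proxy for one item: agreement between the latent trait and
    observed responses, taken as the worst case over mediators. A robust
    (high-validity) item stays high no matter which mediator is active."""
    rng = random.Random(seed)
    per_mediator = []
    for m in MEDIATORS:
        diffs = [abs(simulate_response(t, m, rng) - (1 + 4 * t))
                 for t in trait_levels
                 for _ in range(n_respondents // len(trait_levels))]
        per_mediator.append(1 - mean(diffs) / 4)  # 1.0 = perfect agreement
    return min(per_mediator)
```

In this toy setup, item selection would amount to ranking candidate items by `item_robustness` and keeping those whose score stays high across all simulated mediators.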