Psychometric Item Validation Using Virtual Respondents with Trait-Response Mediators

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Item validation in psychological measurement of large language models (LLMs) currently lacks efficient methods for assessing construct validity. Method: This paper proposes a virtual validation framework grounded in mediation modeling: LLMs generate trait–response mediators—such as cognitive biases and social desirability tendencies—that reflect individual differences and drive simulated respondents’ diverse response behaviors, thereby evaluating how robustly items measure their target constructs (Big Five, Schwartz Values, VIA Strengths). Contribution/Results: This work is the first systematic investigation of LLMs’ potential for psychometric validity assessment without large-scale human-annotated data. Experiments demonstrate that LLMs reliably generate theoretically grounded mediators and accurately reproduce expected response patterns across all three theoretical frameworks. The framework supports item selection and validity evaluation, substantially reducing validation cost while maintaining methodological rigor.

📝 Abstract
As psychometric surveys are increasingly used to assess the traits of large language models (LLMs), the need for scalable survey item generation suited for LLMs has also grown. A critical challenge here is ensuring the construct validity of generated items, i.e., whether they truly measure the intended trait. Traditionally, this requires costly, large-scale human data collection. To make it efficient, we present a framework for virtual respondent simulation using LLMs. Our central idea is to account for mediators: factors through which the same trait can give rise to varying responses to a survey item. By simulating respondents with diverse mediators, we identify survey items that robustly measure intended traits. Experiments on three psychological trait theories (Big5, Schwartz, VIA) show that our mediator generation methods and simulation framework effectively identify high-validity items. LLMs demonstrate the ability to generate plausible mediators from trait definitions and to simulate respondent behavior for item validation. Our problem formulation, metrics, methodology, and dataset open a new direction for cost-effective survey development and a deeper understanding of how LLMs replicate human-like behavior. We will publicly release our dataset and code to support future work.
Problem

Research questions and friction points this paper is trying to address.

Ensuring construct validity of psychometric survey items for LLMs
Reducing costly human data collection for item validation
Simulating diverse virtual respondents to identify robust survey items
Innovation

Methods, ideas, or system contributions that make the work stand out.

Virtual respondent simulation using LLMs
Mediator generation for trait-response diversity
Cost-effective survey item validation framework
Sungjib Lim
Graduate School of Data Science, Seoul National University

Woojung Song
Department of Information System, Hanyang University

Eun-Ju Lee
Seoul National University
computer-mediated communication, social cognition, social influence

Yohan Jo
Seoul National University
Natural Language Processing, Agents, Computational Psychology, Reasoning