Towards Synthesizing Normative Data for Cognitive Assessments Using Generative Multimodal Large Language Models

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cognitive assessment requires standardized normative data as a benchmark; however, novel image-based cognitive tests remain difficult to deploy because no readily available normative datasets exist for them. Traditional normative data collection is costly, time-consuming, and suffers from delayed updates. This study pioneers the use of generative multimodal large language models (GPT-4o and GPT-4o-mini) to synthesize normative textual data for cognitive assessment while explicitly preserving demographic and clinical discriminability. The authors introduce advanced prompting strategies that substantially enhance the fidelity and diversity of generated responses. A comprehensive evaluation framework, spanning embedding analysis, BERTScore, ROUGE, BLEU, and LLM-as-a-judge metrics, shows that the synthetic data closely approximates real-world normative characteristics, with BERTScore proving the most reliable similarity metric and LLM judgments agreeing closely with expert assessment. The approach establishes a new paradigm for low-cost, timely, and iteratively updatable standardization of cognitive assessments.

📝 Abstract
Cognitive assessments require normative data as essential benchmarks for evaluating individual performance. Hence, developing new cognitive tests based on novel image stimuli is challenging due to the lack of readily available normative data. Traditional data collection methods are costly, time-consuming, and infrequently updated, limiting their practical utility. Recent advancements in generative multimodal large language models (MLLMs) offer a new approach to generating synthetic normative data from existing cognitive test images. We investigated the feasibility of using MLLMs, specifically GPT-4o and GPT-4o-mini, to synthesize normative textual responses for established image-based cognitive assessments, such as the "Cookie Theft" picture description task. Two distinct prompting strategies, naive prompts with basic instructions and advanced prompts enriched with contextual guidance, were evaluated. Responses were analyzed using embeddings to assess their capacity to distinguish diagnostic groups and demographic variations. Performance metrics included BLEU, ROUGE, BERTScore, and an LLM-as-a-judge evaluation. Advanced prompting strategies produced synthetic responses that more effectively distinguished between diagnostic groups and captured demographic diversity than naive prompts. Stronger models generated responses exhibiting higher realism and diversity. BERTScore emerged as the most reliable metric for contextual similarity assessment, while BLEU was less effective for evaluating creative outputs. The LLM-as-a-judge approach provided promising preliminary validation results. Our study demonstrates that generative multimodal LLMs, guided by refined prompting methods, can feasibly generate robust synthetic normative data for existing cognitive tests, thereby laying the groundwork for developing novel image-based cognitive assessments without the traditional limitations.
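The surface-overlap metrics named in the abstract (BLEU, ROUGE) reduce to clipped n-gram precision and recall between a candidate response and a reference. A minimal stdlib-only sketch of the core computations; the two picture descriptions are illustrative, not actual study responses:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of reference n-grams recovered by the candidate."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())
    return overlap / max(sum(ref.values()), 1)

def bleu_n_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core of BLEU (brevity penalty omitted)."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())
    return overlap / max(sum(cand.values()), 1)

# Illustrative "Cookie Theft" descriptions (invented for this sketch).
reference = "the boy is stealing cookies while the sink overflows"
synthetic = "a boy takes cookies while the sink overflows onto the floor"

print(round(rouge_n_recall(synthetic, reference), 2))   # → 0.78
print(round(bleu_n_precision(synthetic, reference), 2)) # → 0.64
```

The gap between the two numbers hints at why BLEU penalizes creative paraphrase: a valid synthetic description adds novel wording that lowers precision even when it recovers most of the reference content.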
Problem

Research questions and friction points this paper is trying to address.

Lack of normative data for new cognitive tests
High cost and slow updates of traditional data collection
Unproven feasibility of synthesizing normative data with multimodal LLMs
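The paper contrasts naive prompts (basic instructions) with advanced prompts enriched with contextual guidance. A hedged sketch of how such a prompt pair might be assembled, assuming the advanced condition conditions on a demographic and clinical profile; the wording and fields are illustrative, not the study's actual prompts:

```python
def naive_prompt():
    """Basic instruction only, mirroring the naive prompting condition."""
    return "Describe everything you see happening in this picture."

def advanced_prompt(age, education_years, diagnosis):
    """Context-enriched prompt conditioned on a demographic/clinical profile.

    The persona fields below are assumptions for illustration; the study's
    exact contextual guidance is not reproduced here.
    """
    return (
        f"You are a {age}-year-old adult with {education_years} years of "
        f"education and a clinical status of '{diagnosis}'. "
        "Describe everything you see happening in this picture, in the way "
        "such a person would during a cognitive assessment."
    )

print(naive_prompt())
print(advanced_prompt(age=72, education_years=12, diagnosis="healthy control"))
```

Either prompt would be sent alongside the test image (e.g., the "Cookie Theft" picture) to a multimodal model; only the advanced variant gives the model a profile to vary responses across demographic and diagnostic groups.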
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using generative multimodal LLMs for synthetic data
Advanced prompting strategies enhance response quality
BERTScore metric reliably assesses contextual similarity
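The embedding analysis asks whether synthetic responses from different diagnostic groups separate in vector space. A toy stdlib-only sketch of that check, substituting bag-of-words count vectors for the real sentence embeddings the paper would use; all three responses are invented examples:

```python
import math
from collections import Counter

def bow_embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would use a sentence encoder."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Invented responses: two fluent control-style descriptions, one fragmented one.
control_a = bow_embed("the boy is stealing cookies while the sink overflows")
control_b = bow_embed("a boy takes cookies and the sink is overflowing")
impaired = bow_embed("boy cookie water um the the")

# Same-group responses should sit closer together than cross-group pairs.
print(cosine(control_a, control_b) > cosine(control_a, impaired))  # → True
```

With real embeddings the same comparison scales to whole cohorts, where between-group versus within-group similarity quantifies the diagnostic discriminability the paper reports.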
Victoria Yan (The Westminster Schools)
Honor Chotkowski (Center for Data Science, Nell Hodgson Woodruff School of Nursing, Emory University)
Fengran Wang (Department of Computer Science, Emory University)
Alex Fedorov (Emory University)
Representation Learning · Multimodal Learning · Self-Supervision · Neuroimaging