Understanding Critical Thinking in Generative Artificial Intelligence Use: Development, Validation, and Correlates of the Critical Thinking in AI Use Scale

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
The absence of a validated measure of critical thinking in generative AI use hinders both assessment and intervention. Method: We developed and validated the 13-item “Critical Thinking in AI Use Scale” (AI-CT) through iterative scale development, multi-group confirmatory factor analysis (testing measurement invariance across gender), and structural equation modeling. Grounded in theory and empirical validation, the scale reveals a novel three-factor higher-order structure: source/content verification, motivation to understand AI model mechanisms and limitations, and reflection on the societal implications of AI dependence. To enhance ecological validity, we integrated a ChatGPT-driven factual verification task. Contribution/Results: The AI-CT demonstrates strong reliability and construct validity. High scorers verify AI outputs more frequently and with more diverse strategies, achieve significantly higher factual judgment accuracy, and exhibit deeper reflection on responsible AI use. The scale provides the first empirically grounded, structurally coherent instrument for assessing and fostering AI literacy.

📝 Abstract
Generative AI tools are increasingly embedded in everyday work and learning, yet their fluency, opacity, and propensity to hallucinate mean that users must critically evaluate AI outputs rather than accept them at face value. The present research conceptualises critical thinking in AI use as a dispositional tendency to verify the source and content of AI-generated information, to understand how models work and where they fail, and to reflect on the broader implications of relying on AI. Across six studies (N = 1365), we developed and validated the 13-item critical thinking in AI use scale and mapped its nomological network. Study 1 generated and content-validated scale items. Study 2 supported a three-factor structure (Verification, Motivation, and Reflection). Studies 3, 4, and 5 confirmed this higher-order model, demonstrated internal consistency and test-retest reliability, strong factor loadings, sex invariance, and convergent and discriminant validity. Studies 3 and 4 further revealed that critical thinking in AI use was positively associated with openness, extraversion, positive trait affect, and frequency of AI use. Lastly, Study 6 demonstrated criterion validity of the scale, with higher critical thinking in AI use scores predicting more frequent and diverse verification strategies, greater veracity-judgement accuracy in a novel and naturalistic ChatGPT-powered fact-checking task, and deeper reflection about responsible AI. Taken together, the current work clarifies why and how people exercise oversight over generative AI outputs and provides a validated scale and ecologically grounded task paradigm to support theory testing, cross-group, and longitudinal research on critical engagement with generative AI outputs.
Problem

Research questions and friction points this paper is trying to address.

Develops a scale to measure critical thinking in AI use.
Validates the scale's structure and reliability across studies.
Examines how critical thinking predicts verification and reflection behaviors.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a 13-item scale measuring critical thinking in AI use.
Validated a three-factor higher-order structure: Verification, Motivation, and Reflection.
Created an ecologically grounded task paradigm for assessing AI verification strategies.
Gabriel R. Lau
School of Social Sciences, Nanyang Technological University, Singapore
Wei Yan Low
Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
Louis Tay
William C. Byham Professor of Industrial-Organizational Psychology, Purdue University
Well-being, Character, Vocational Interests, Measurement, Machine Learning
Ysabel Guevarra
School of Social Sciences, Singapore Management University, Singapore
Dragan Gašević
Faculty of Information Technology, Monash University, Victoria, Australia
Andree Hartanto
School of Social Sciences, Singapore Management University, Singapore