🤖 AI Summary
Generative AI search can undermine users' metacognitive engagement, weaken critical thinking, and foster cognitive passivity through overreliance on AI outputs. Method: This study introduces structured metacognitive prompting (pause, reflection, multi-perspective evaluation, and comprehension verification) into the GenAI search workflow. A mixed-methods investigation (N=40 undergraduate participants) combined behavioral logging, in-depth interviews, and qualitative coding. Results: The intervention significantly increased question depth (+42%), thematic breadth, and multi-perspective awareness; 87% of participants engaged more proactively in evaluating AI outputs and distilling core insights. Crucially, the study shows that metacognitive prompts activate higher-order cognitive regulation, and it identifies *metacognitive flexibility*, an individual's capacity to dynamically adapt metacognitive strategies, as the key moderator of intervention efficacy. These findings advance theoretical understanding of human-AI co-reasoning and offer empirically grounded design principles for fostering reflective, cognitively empowering AI interaction.
📝 Abstract
The growing use of Generative AI (GenAI) conversational search tools has raised concerns about their effects on people's metacognitive engagement, critical thinking, and learning. As people increasingly rely on GenAI to perform tasks such as analyzing and applying information, they may become less actively engaged in thinking and learning. This study examines whether metacognitive prompts (designed to encourage people to pause, reflect, assess their understanding, and consider multiple perspectives) can support critical thinking during GenAI-based search. We conducted a user study (N=40) with university students to investigate the impact of metacognitive prompts on their thought processes and search behaviors while searching with a GenAI tool. We found that these prompts led to more active engagement, with students exploring a broader range of topics and pursuing deeper inquiry through follow-up queries. Students reported that the prompts were especially helpful for considering overlooked perspectives, promoting evaluation of AI responses, and identifying key takeaways. Additionally, the effectiveness of these prompts was influenced by students' metacognitive flexibility. Our findings highlight the potential of metacognitive prompts to foster critical thinking and provide insights for designing and implementing metacognitive support in human-AI interactions.
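
To make the intervention concrete, here is a minimal sketch of how metacognitive prompts could be interleaved into a GenAI conversational search loop. The prompt wording, the `genai_search` stub, and the round-robin scheduling are illustrative assumptions, not the paper's actual system or prompt bank.

```python
import itertools

# Hypothetical prompt bank; the four categories mirror the intervention
# described above (pause, reflection, multi-perspective evaluation,
# comprehension verification), but the exact wording is assumed.
METACOGNITIVE_PROMPTS = {
    "pause": "Before reading on, pause: what did you expect this answer to say?",
    "reflect": "How does this answer confirm or change what you already believed?",
    "perspectives": "Whose perspective might be missing from this answer?",
    "comprehension": "Summarize the key takeaway in your own words. What is still unclear?",
}


def genai_search(query: str) -> str:
    """Stand-in for a GenAI conversational search call (stubbed for illustration)."""
    return f"[GenAI response to: {query}]"


def search_with_metacognitive_prompts(queries: list[str]) -> None:
    """Interleave one metacognitive prompt after each GenAI response.

    Cycling through the categories in fixed order is an assumption made here
    for simplicity; the study does not prescribe a scheduling policy.
    """
    prompt_cycle = itertools.cycle(METACOGNITIVE_PROMPTS.items())
    for query in queries:
        answer = genai_search(query)
        category, prompt = next(prompt_cycle)
        print(f"Q: {query}\nA: {answer}\n[{category}] {prompt}\n")


if __name__ == "__main__":
    search_with_metacognitive_prompts([
        "What are the health effects of intermittent fasting?",
        "Are there risks for specific groups?",
    ])
```

In a real interface, each prompt would be shown to the user between the AI response and their next query, giving a natural point to pause, evaluate the output, or reformulate the search before continuing.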