Trust and Reliance on AI in Education: AI Literacy and Need for Cognition as Moderators

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how students’ trust in AI assistants influences their calibrated reliance during programming problem solving, particularly their ability to discern between accurate and erroneous advice. Through a behavioral experiment involving a Python output prediction task and pre- and post-study questionnaires, the research examines university students’ adoption of correct versus misleading suggestions from a generative AI system, while analyzing the moderating roles of AI literacy and need for cognition. Findings reveal a nonlinear relationship between trust and calibrated reliance: high trust paradoxically impairs critical judgment, whereas high AI literacy and a strong need for cognition significantly mitigate this adverse effect, fostering more discerning use of AI-generated advice. This work provides the first empirical evidence of these moderating mechanisms, offering theoretical and practical insights into human–AI collaboration dynamics.
📝 Abstract
As generative AI systems are integrated into educational settings, students often encounter AI-generated output while working through learning tasks, either by requesting help or through integrated tools. Trust in AI can influence how students interpret and use that output, including whether they evaluate it critically or exhibit overreliance. We investigate how students' trust relates to their appropriate reliance on an AI assistant during programming problem-solving tasks, and whether this relationship differs by learner characteristics. In a study with 432 undergraduate participants, students completed Python output-prediction problems while receiving recommendations and explanations from an AI chatbot, including accurate and intentionally misleading suggestions. We operationalize reliance behaviorally as the extent to which students' responses reflected appropriate use of the AI assistant's suggestions, accepting them when they were correct and rejecting them when they were incorrect. Pre- and post-task surveys assessed trust in the assistant, AI literacy, need for cognition, programming self-efficacy, and programming literacy. Results showed a non-linear relationship in which higher trust was associated with lower appropriate reliance, suggesting weaker discrimination between correct and incorrect recommendations. This relationship was significantly moderated by students' AI literacy and need for cognition. These findings highlight the need for future work on instructional and system supports that encourage more reflective evaluation of AI assistance during problem-solving.
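The abstract does not spell out the analysis, but the described design (a behavioral appropriate-reliance score, a non-linear trust effect, and moderation by AI literacy and need for cognition) can be sketched as a moderated regression. The snippet below is a minimal, hypothetical illustration with simulated data; the variable names, scales, and model form are assumptions, not the authors' actual analysis.

```python
# Hypothetical sketch: scoring appropriate reliance and testing a non-linear,
# moderated effect of trust. All data here are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 432  # sample size reported in the abstract

# Simulated per-student measures (stand-ins for the real survey/task data).
df = pd.DataFrame({
    "trust": rng.uniform(1, 7, n),        # post-task trust in the AI assistant
    "ai_literacy": rng.uniform(1, 7, n),  # self-reported AI literacy
    "nfc": rng.uniform(1, 7, n),          # need for cognition
})

# Appropriate reliance: share of trials where the student accepted correct
# advice or rejected incorrect advice (simulated here as a placeholder).
df["appropriate_reliance"] = rng.uniform(0, 1, n)

# Quadratic trust term captures a non-linear relationship; interaction terms
# test whether AI literacy and need for cognition moderate the trust effect.
model = smf.ols(
    "appropriate_reliance ~ trust + I(trust**2)"
    " + ai_literacy + nfc + trust:ai_literacy + trust:nfc",
    data=df,
).fit()
print(model.summary())
```

In this kind of model, significant trust-by-moderator interaction coefficients would correspond to the paper's finding that AI literacy and need for cognition buffer the negative association between high trust and appropriate reliance.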
Problem

Research questions and friction points this paper is trying to address.

trust in AI
appropriate reliance
AI literacy
need for cognition
overreliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

appropriate reliance
AI literacy
need for cognition
trust in AI
generative AI in education