Improving Student-AI Interaction Through Pedagogical Prompting: An Example in Computer Science Education

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Students' overreliance on large language models (LLMs) for direct answers undermines deep learning and metacognitive development in foundational computer science courses. Method: The study proposes a theoretical framework of "pedagogical prompting" and designs, implements, and evaluates a learning-oriented, AI-mediated instructional intervention. Using mixed methods (an instructor survey, interactive system development, a contextualized classroom study, and iterative refinement based on instructor and student feedback), the intervention emphasizes scaffolding over answer provision. Contribution/Results: Results show a statistically significant improvement in novices' AI-assisted help-seeking competence (p < 0.01), high system acceptance (Net Promoter Score = +42), and increased intent to adopt the tool in future learning. Critically, the approach reframes generative AI from an "answer engine" into a "learning scaffold" that supports conceptual understanding and self-regulated learning. The intervention is scalable and shows strong potential for integration into large-enrollment CS instruction.

📝 Abstract
With the proliferation of large language model (LLM) applications since 2022, their use in education has sparked both excitement and concern. Recent studies consistently highlight that students' (mis)use of LLMs can hinder learning outcomes. This work aims to teach students how to effectively prompt LLMs to improve their learning. We first propose pedagogical prompting, a new, theoretically grounded concept for eliciting learning-oriented responses from LLMs. To move from concept design to a proof-of-concept learning intervention in real educational settings, we selected early undergraduate CS education (CS1/CS2) as the example context. We began with a formative survey study of instructors (N=36) teaching early-stage undergraduate CS courses to inform the instructional design based on classroom needs. Based on their insights, we designed and developed a learning intervention delivered through an interactive system with scenario-based instruction to train pedagogical prompting skills. Finally, we evaluated its instructional effectiveness through a user study with CS novice students (N=22) using pre/post-tests. Through mixed-methods analyses, our results indicate significant improvements in learners' LLM-based pedagogical help-seeking skills, along with positive attitudes toward the system and increased willingness to use pedagogical prompts in the future. Our contributions include (1) a theoretical framework of pedagogical prompting; (2) empirical insights into current instructor attitudes toward pedagogical prompting; and (3) a learning intervention design, combining an interactive learning tool with scenario-based instruction, that yields promising results for teaching LLM-based help-seeking. Our approach is scalable for broader implementation in classrooms and has the potential to be integrated into tools like ChatGPT as an onboarding experience that encourages learning-oriented use of generative AI.
Problem

Research questions and friction points this paper is trying to address.

Teaching students effective LLM prompting for better learning
Developing pedagogical prompting to enhance student-AI interaction
Improving CS education through learning-oriented AI responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pedagogical prompting for learning-oriented LLM responses
Interactive system with scenario-based instruction
Evaluated effectiveness via pre/post-tests with students