🤖 AI Summary
This study investigates the potential adverse effects of generative AI (e.g., ChatGPT) on programming students' learning. Method: An exploratory experiment gave 37 students monitored access to ChatGPT during a code comprehension and improvement exercise, triangulating behavioral logs, structured task design, qualitative coding, and self-report questionnaires. Contribution/Results: We empirically identify two dominant interaction strategies, "concept-oriented querying" (asking about underlying principles) and "full-solution generation" (requesting complete code), and uncover a detrimental "error submission–repair request" cycle. Only 23 students actively used the AI, most of them opting for direct code generation, and students who self-reported frequent AI use were more likely to prompt for full solutions. The findings suggest that uncritical reliance on generative AI risks undermining students' debugging skills and programming agency.
📝 Abstract
Programming students have widespread access to powerful generative AI tools like ChatGPT. While such tools can help them understand learning material and assist with exercises, educators are voicing growing concerns about over-reliance on generated output and a lack of critical thinking skills. It is thus important to understand how students actually use generative AI and what impact this could have on their learning behavior. To this end, we conducted a study including an exploratory experiment with 37 programming students, giving them monitored access to ChatGPT while solving a code understanding and improvement exercise. Only 23 of the students opted to use the chatbot, and the majority of those eventually prompted it to simply generate a full solution. We observed two prevalent usage strategies: seeking knowledge about general concepts and directly generating solutions. Instead of using the bot to comprehend the code and their own mistakes, students often got trapped in a vicious cycle of submitting wrong generated code and then asking the bot for a fix. Those who self-reported using generative AI regularly were more likely to prompt the bot to generate a solution. Our findings indicate that concerns about a potential decrease in programmers' agency and productivity with generative AI are justified. We discuss how researchers and educators can respond to the risk of students uncritically over-relying on generative AI, and we outline potential modifications to our study design for large-scale replications.