🤖 AI Summary
This study investigates the pedagogical impact of a generative AI teaching assistant in a graduate-level advanced quantitative methods course and examines heterogeneity in student AI usage.
Method: Drawing on question logs and surveys from 20 students, plus in-depth interviews with 10 of them, we employed Bloom's Taxonomy and the CLEAR framework for qualitative coding, complemented by t-tests and Poisson regression for quantitative analysis.
Contribution/Results: We identify a U-shaped temporal pattern in AI query frequency over the course, alongside systematic differences by disciplinary preparedness: students with weaker mathematical foundations ask more frequent but less logically structured questions, predominantly at the knowledge/comprehension levels, whereas those with stronger foundations ask fewer but deeper, higher-order questions. These differences show systematic associations with cognitive taxonomy levels. We further pinpoint critical intervention windows and propose a tiered scaffolding strategy. The findings provide empirical evidence and actionable guidelines for leveraging AI to support differentiated instruction in quantitative education.
📝 Abstract
The study was conducted in an Advanced Quantitative Research Methods course involving 20 graduate students. During the course, student inquiries made to the AI were recorded and coded using Bloom's taxonomy and the CLEAR framework. A series of independent-samples t-tests and Poisson regression analyses were employed to analyse the characteristics of the questions asked by students with different backgrounds. Post-course interviews were conducted with 10 students to gain deeper insights into their perceptions. The findings revealed a U-shaped pattern in students' use of the AI assistant, with higher usage at the beginning and towards the end of the course and a decrease during the middle weeks. Most questions posed to the AI focused on the knowledge and comprehension levels, with fewer questions involving deeper cognitive thinking. Students with a weaker mathematical foundation used the AI assistant more frequently, though their inquiries tended to lack explicit and logical structure compared with those of students with a strong mathematical foundation, who engaged less with the tool. These patterns suggest the need for targeted guidance to optimise the effectiveness of AI tools for students with varying levels of academic proficiency.
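To illustrate the kind of count-data analysis described above, the sketch below fits a Poisson regression to synthetic data (not the study's data) relating weekly query counts to a binary weaker-foundation indicator. The sample size, effect sizes, and variable names are assumptions for illustration only; the fit uses iteratively reweighted least squares, the standard Newton-type algorithm for generalised linear models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: query counts for 20 students over 10 weeks,
# with an indicator for a weaker mathematical foundation (1 = weaker).
# The positive 0.7 effect is an assumption mirroring the reported pattern.
n = 200  # 20 students x 10 weeks
weaker = rng.integers(0, 2, n)
lam = np.exp(1.0 + 0.7 * weaker)  # assumed: weaker foundation -> more queries
counts = rng.poisson(lam)

# Poisson regression with a log link, fitted by IRLS.
X = np.column_stack([np.ones(n), weaker])
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(X @ beta)              # current fitted mean
    z = X @ beta + (counts - mu) / mu  # working response
    # Weighted least-squares step; for Poisson the working weights equal mu.
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

print(f"intercept={beta[0]:.2f}, weaker-foundation coefficient={beta[1]:.2f}")
```

A positive coefficient on the indicator corresponds to a higher expected query rate for weaker-foundation students; in practice one would use a packaged GLM routine rather than hand-rolled IRLS, but the update step is the same.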