Unpacking Graduate Students' Learning Experience with a Generative AI Teaching Assistant in a Quantitative Methodology Course

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the pedagogical impact of a generative AI teaching assistant in a graduate-level advanced quantitative methods course and examines heterogeneity in students' AI usage. Method: Drawing on question logs, surveys, and in-depth interviews from 20 students, we employed Bloom's Taxonomy and the CLEAR framework for qualitative coding, complemented by t-tests and Poisson regression for quantitative analysis. Contribution/Results: We identify a U-shaped temporal pattern in AI usage, with higher query frequency at the beginning and end of the course and a dip in the middle weeks. Students with weaker mathematical foundations ask more frequent but less logically structured questions, predominantly at the knowledge/comprehension levels; those with stronger foundations ask fewer but deeper, higher-order questions. These patterns exhibit systematic associations with cognitive taxonomy levels and disciplinary preparedness. We further pinpoint critical intervention windows and propose a tiered scaffolding strategy. The findings provide empirical evidence and actionable guidelines for leveraging AI to support differentiated instruction in quantitative education.

📝 Abstract
The study was conducted in an Advanced Quantitative Research Methods course involving 20 graduate students. During the course, students' inquiries to the AI assistant were recorded and coded using Bloom's taxonomy and the CLEAR framework. A series of independent-samples t-tests and Poisson regression analyses were employed to analyse the characteristics of the questions asked by students from different backgrounds. Post-course interviews were conducted with 10 students to gain deeper insights into their perceptions. The findings revealed a U-shaped pattern in students' use of the AI assistant, with higher usage at the beginning and towards the end of the course and a decrease during the middle weeks. Most questions posed to the AI focused on the knowledge and comprehension levels, with fewer involving deeper cognitive thinking. Students with a weaker mathematical foundation used the AI assistant more frequently, though their inquiries tended to lack explicit and logical structure compared with those of students with a strong mathematical foundation, who engaged less with the tool. These patterns suggest the need for targeted guidance to optimise the effectiveness of AI tools for students with varying levels of academic proficiency.
Problem

Research questions and friction points this paper is trying to address.

Analyzes graduate students' usage patterns of AI teaching assistants
Examines cognitive levels of student inquiries using Bloom's taxonomy
Investigates academic proficiency impact on AI tool effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coded student questions with Bloom's taxonomy and the CLEAR framework
Applied independent-samples t-tests and Poisson regression analyses
Triangulated question logs with surveys and post-course student interviews
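The statistical pipeline listed above (independent-samples t-tests on group differences and Poisson regression on query counts) can be sketched in miniature. All numbers, group labels, and helper functions below are invented for illustration and are not the paper's data; note that a Poisson regression with a single binary predictor has a closed-form fit, where the coefficient is simply the log of the group rate ratio.

```python
import math
from statistics import mean, stdev

# Hypothetical weekly AI-query counts per student (invented for illustration)
weak = [14, 11, 16, 13, 12, 15]    # weaker mathematical foundation
strong = [6, 8, 5, 7, 6, 9]        # stronger mathematical foundation

def welch_t(a, b):
    """Welch's t statistic (unequal-variance two-sample t-test)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

def poisson_log_rate_ratio(a, b):
    """Poisson regression with one binary predictor: the fitted slope
    equals the log of the ratio of the two group means (rates)."""
    return math.log(mean(a) / mean(b))

t = welch_t(weak, strong)
beta = poisson_log_rate_ratio(weak, strong)
print(f"Welch t = {t:.2f}, log rate ratio = {beta:.2f}, "
      f"rate ratio = {math.exp(beta):.2f}")
```

In practice one would fit the Poisson model with covariates (e.g. `statsmodels` GLM with a log link) and obtain p-values and confidence intervals; the closed-form version above only illustrates what the coefficient for a binary group indicator means.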
Zhanxin Hao
School of Education, Tsinghua University
AI in Education, Educational Assessment

Haifeng Luo
University of Edinburgh
Wireless communications, In-band full-duplex

Yongyi Chen
Institute of Education, Tsinghua University

Yu Zhang
Institute of Education, Tsinghua University