🤖 AI Summary
To address the challenge of generating high-quality, curriculum-aligned, and cognitively diverse practice problems for advanced mathematics courses, this paper proposes an example-guided, content-aligned generation framework tailored to educational contexts. Leveraging open large language models, the method combines few-shot prompting, course-text injection, and a multi-dimensional quality-evaluation feedback loop to jointly optimize generated problems for topic relevance, controllable difficulty, and progression through Bloom's taxonomy, from recall to reasoning. Experiments show significant improvements in problem accuracy, pedagogical appropriateness, and cognitive challenge, enabling multi-level mathematical competency training. The key innovation is the explicit embedding of course knowledge structures into the generation pipeline, establishing a reusable educational-alignment paradigm and giving educators a lightweight, practical AI-augmented lesson-preparation tool.
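The pipeline described above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function names (`build_prompt`, `score_problem`, `generate_with_feedback`), the scoring heuristics, and the stubbed-out LLM call are all hypothetical stand-ins for the framework's components.

```python
# Hypothetical sketch: example-guided prompting with course-text injection
# and a quality-feedback regeneration loop. The real framework uses an open
# LLM and a multi-dimensional evaluator; both are stubbed here.

def build_prompt(topic, course_text, examples):
    """Combine injected course material with few-shot example problems."""
    shots = "\n\n".join(f"Example problem:\n{e}" for e in examples)
    return (
        f"Course material:\n{course_text}\n\n"
        f"{shots}\n\n"
        f"Write one practice problem on '{topic}' in the style above."
    )

def score_problem(problem, topic):
    """Toy two-dimensional quality check: topic relevance and a crude
    cue for higher-order (reasoning-level) cognitive demand."""
    text = problem.lower()
    relevance = 1.0 if topic.lower() in text else 0.0
    reasoning = 1.0 if any(w in text for w in ("prove", "show", "explain")) else 0.5
    return (relevance + reasoning) / 2

def generate_with_feedback(llm, topic, course_text, examples,
                           threshold=0.9, max_iters=3):
    """Regenerate until the evaluator's score clears the threshold,
    feeding a critique back into the prompt on each failed attempt."""
    prompt = build_prompt(topic, course_text, examples)
    best = ("", 0.0)
    for _ in range(max_iters):
        problem = llm(prompt)
        score = score_problem(problem, topic)
        if score > best[1]:
            best = (problem, score)
        if score >= threshold:
            break
        prompt += "\nThe previous attempt was too shallow; require a proof step."
    return best
```

Any callable that maps a prompt string to a problem string can be plugged in as `llm`, so the loop is model-agnostic; the evaluator is the component the paper replaces with a multi-dimensional rubric.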
📝 Abstract
Educators have started to turn to Generative AI (GenAI) to help create new course content, but little is known about how best to do so. In this project, we investigated the first steps toward optimizing content creation for advanced mathematics. In particular, we examined the ability of GenAI to produce high-quality practice problems that are relevant to the course content. We conducted two studies to (1) explore the capabilities of current versions of publicly available GenAI and (2) develop an improved framework that addresses the limitations we found. Our results showed that GenAI can create math problems of varying quality with minimal support, but that providing examples and relevant course content yields better outputs. This research can help educators decide how best to adopt GenAI into their workflows to create more effective educational experiences for students.