Lightweight Prompt Engineering for Cognitive Alignment in Educational AI: A OneClickQuiz Case Study

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the misalignment between AI-generated educational content—such as quiz items produced by the OneClickQuiz Moodle plugin—and Bloom’s taxonomy cognitive levels (Remembering, Applying, Analyzing). We propose a lightweight yet high-precision prompt engineering strategy that departs from conventional concise or role-based prompts. Specifically, we design three structured prompt variants explicitly embedding cognitive objectives, action verbs, and answer constraints. Using an automated classification model augmented by expert human evaluation, we systematically assess the cognitive-level fidelity of generated questions. Experimental results demonstrate that such detailed prompts significantly improve alignment accuracy across all targeted levels—particularly in Applying and Analyzing—without requiring model fine-tuning or additional training. This work contributes a reusable, empirically validated prompt design paradigm for low-cost, highly controllable AI-generated educational content.
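The summary above describes prompts that explicitly embed a cognitive objective, Bloom-level action verbs, and answer constraints. The paper does not publish its exact templates, so the following is a hypothetical sketch of what such a "detailed" prompt builder could look like; the structure follows the summary, but the wording, verb lists, and function names are assumptions.

```python
# Hypothetical sketch of a "detailed" prompt variant: embeds the target
# Bloom level, matching action verbs, and answer-format constraints.
# The exact wording is an assumption, not the authors' template.

BLOOM_VERBS = {
    "Remembering": ["define", "list", "recall"],
    "Applying": ["use", "demonstrate", "solve"],
    "Analyzing": ["compare", "differentiate", "examine"],
}

def build_detailed_prompt(topic: str, level: str, n_options: int = 4) -> str:
    """Assemble a quiz-generation prompt targeting one Bloom level."""
    verbs = ", ".join(BLOOM_VERBS[level])
    return (
        f"Generate one multiple-choice question on '{topic}'.\n"
        f"Cognitive objective: target the '{level}' level of Bloom's taxonomy.\n"
        f"Use an action verb such as: {verbs}.\n"
        f"Answer constraints: exactly {n_options} options, one correct answer, "
        f"plausible distractors, no 'all of the above'."
    )

print(build_detailed_prompt("binary search trees", "Analyzing"))
```

The point of the detailed variant, per the study, is that making the cognitive objective and constraints explicit in the prompt text is what drives alignment, with no fine-tuning of the underlying model.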

📝 Abstract
The rapid integration of Artificial Intelligence (AI) into educational technology promises to revolutionize content creation and assessment. However, the quality and pedagogical alignment of AI-generated content remain critical challenges. This paper investigates the impact of lightweight prompt engineering strategies on the cognitive alignment of AI-generated questions within OneClickQuiz, a Moodle plugin leveraging generative AI. We evaluate three prompt variants (a detailed baseline, a simpler version, and a persona-based approach) across the Knowledge, Application, and Analysis levels of Bloom's Taxonomy. Utilizing an automated classification model (from prior work) and human review, our findings demonstrate that explicit, detailed prompts are crucial for precise cognitive alignment. While simpler and persona-based prompts yield clear and relevant questions, they frequently misalign with the intended Bloom's levels, generating outputs that are either too complex or deviate from the desired cognitive objective. This study underscores the importance of strategic prompt engineering in fostering pedagogically sound AI-driven educational solutions and advises on optimizing AI for quality content generation in learning analytics and smart learning environments.
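The abstract describes assessing cognitive-level fidelity with an automated classifier (from prior work) plus human review. That classifier is not available here, so the sketch below substitutes a naive verb-keyword heuristic as a stand-in, purely to illustrate the shape of the alignment-accuracy measurement; the keyword lists and sample questions are assumptions.

```python
# Illustration of the alignment check: classify each generated question
# into a Bloom level and score agreement with the intended level.
# A keyword heuristic stands in for the paper's trained classifier.

LEVEL_KEYWORDS = {
    "Knowledge": ("define", "list", "what is"),
    "Application": ("apply", "solve", "demonstrate"),
    "Analysis": ("compare", "differentiate", "why"),
}

def classify_level(question: str) -> str:
    """Assign a Bloom level by first matching keyword (naive stand-in)."""
    q = question.lower()
    for level, keys in LEVEL_KEYWORDS.items():
        if any(k in q for k in keys):
            return level
    return "Knowledge"  # fallback for unmatched questions

def alignment_accuracy(items: list[tuple[str, str]]) -> float:
    """items: (generated question, intended Bloom level) pairs."""
    hits = sum(classify_level(q) == lvl for q, lvl in items)
    return hits / len(items)

sample = [
    ("Define a stack.", "Knowledge"),
    ("Apply Dijkstra's algorithm to this graph.", "Application"),
    ("Compare BFS and DFS traversal order.", "Analysis"),
]
print(alignment_accuracy(sample))  # → 1.0
```

In the study itself this automated score is cross-checked by expert human review, which the heuristic above obviously cannot replace.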
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompt engineering for AI-generated educational content alignment
Ensuring pedagogical quality of AI questions with Bloom's Taxonomy
Addressing cognitive misalignment in lightweight AI prompt strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight prompt engineering for cognitive alignment
Evaluated three prompt variants across Bloom's Taxonomy
Explicit detailed prompts ensure precise cognitive alignment
Antoun Yaacoub
Associate Professor in AI
Artificial Intelligence · Neural networks · Machine learning · Deep learning
Jérôme Da-Rugna
Learning, Data and Robotics (LDR), esieaLab ESIEA, 9 Rue Vésale, 75005 Paris, France
Zainab Assaghir
Faculty of Science, Lebanese University, Rafic Hariri University Campus, Beirut, Lebanon