🤖 AI Summary
This study addresses the lack of dynamic learning monitoring and personalized feedback in electronic textbooks by designing and implementing MetaCQ, a self-adaptive learning environment that integrates an Intelligent Tutoring System (ITS) with an Open Learner Model (OLM). Leveraging LLM-powered chatbots, MetaCQ dynamically generates contextually appropriate multiple-choice questions aligned with learners' real-time progress. It employs multi-dimensional learner modeling and three adaptive feedback mechanisms (explanatory, scaffolding, and metacognitive guidance) to foster self-regulated learning and well-regulated help-seeking behavior. Think-aloud protocols were used to examine item relevance and difficulty, and embedded metacognitive scaffolds are intended to support reflective practice. Initial results indicate that MetaCQ can generate contextually relevant assessments and deliver personalized feedback, making the learning process more transparent and supporting self-monitoring; however, the current experiment does not establish which of the three feedback types most effectively assesses learning outcomes, and larger-scale validation is still needed.
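As a rough illustration of how such progress-aligned MCQ generation could be wired up, the sketch below assumes an OpenAI-style chat-completion API; the model name, prompt wording, and the `topic`/`mastery` parameters are illustrative assumptions, not MetaCQ's actual prompts or design.

```python
# Hypothetical sketch of LLM-backed MCQ generation keyed to the learner's
# current section and estimated mastery. Prompt, model, and fields are
# illustrative only; the paper does not specify MetaCQ's implementation.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_mcq(section_text: str, topic: str, mastery: float) -> dict:
    """Ask the chatbot for one multiple-choice question on the current section,
    scaled to the learner's estimated mastery (0.0 = novice, 1.0 = proficient)."""
    difficulty = (
        "introductory" if mastery < 0.4
        else "intermediate" if mastery < 0.8
        else "challenging"
    )
    prompt = (
        f"Write one {difficulty} multiple-choice question about '{topic}' "
        f"based only on this passage:\n{section_text}\n"
        "Return JSON with keys: question, options (list of 4), answer_index, explanation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```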
📝 Abstract
This study proposes an e-textbook platform, MetaCQ, which integrates an ITS and an OLM so that users can monitor their study progress. The platform uses a chatbot to generate MCQs and to manage learners' study data and learner models. It also regulates help-seeking behaviour and provides immediate feedback tailored to each user's learning process. Three adaptive feedback methods were implemented in the chatbot, and a think-aloud study examined the relevance and difficulty of the generated MCQs to determine which method measures the user's study performance most effectively. However, the current experiment yields no valid result showing which method can significantly assess learners' study outcomes, so further studies are required.
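For concreteness, the sketch below shows one way the three feedback methods could be chosen from a simple learner model; the `LearnerState` fields, thresholds, and rules are hypothetical assumptions for illustration, not the selection logic reported in the paper.

```python
# Hypothetical feedback-selection heuristic for the three adaptive feedback
# modes named in the paper (explanatory, scaffolding, metacognitive guidance).
# Thresholds and learner-model fields are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class LearnerState:
    mastery: float       # estimated mastery of the current topic, 0.0-1.0
    recent_errors: int   # wrong answers on the last few MCQs
    help_requests: int   # times the learner asked the chatbot for help on this topic


def choose_feedback(state: LearnerState) -> str:
    """Pick a feedback mode for the learner's next incorrect answer."""
    if state.help_requests > 3:
        # Frequent help-seeking: prompt reflection before giving more answers.
        return "metacognitive-guidance"
    if state.mastery < 0.4 or state.recent_errors >= 2:
        # Struggling learner: break the problem into guided steps.
        return "scaffolding"
    # Otherwise a direct explanation of the correct option suffices.
    return "explanatory"


print(choose_feedback(LearnerState(mastery=0.7, recent_errors=0, help_requests=1)))  # explanatory
```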