🤖 AI Summary
This study addresses the time-intensive and laborious nature of manual test-item authoring in higher education by proposing an interactive, prompt-engineering-based AI item-generation framework that leverages ChatGPT to automatically produce personalized, high-quality assessment items and to support learner performance evaluation. Methodologically, it integrates structured prompt design with a blind evaluation protocol, validated collaboratively by multiple stakeholders (faculty, subject-matter experts, and students) to ensure item quality. Empirical implementation at the Banking Academy of Vietnam demonstrated a 62% average reduction in item-authoring time and a faculty satisfaction rating of 4.6/5.0, and the generated items met established psychometric standards for reliability and validity. The core contribution is the first explainable, iteratively optimizable, interactive AI-based item-authoring paradigm designed specifically for higher education, offering a reusable technical pathway and empirical evidence for intelligent educational assessment.
📝 Abstract
Large language models have been widely applied in many aspects of real life, bringing significant efficiency gains to businesses and offering distinctive user experiences. In this paper, we focus on exploring the application of ChatGPT, a chatbot based on a large language model, to support higher educators in generating quiz questions and assessing learners. Specifically, we explore interactive prompting patterns to design an optimal AI-powered question-bank creation process. The generated questions are evaluated through a "blind test" survey sent to various stakeholders, including lecturers and learners. Initial results at the Banking Academy of Vietnam are promising, suggesting a potential direction for streamlining the time and effort involved in assessing learners at higher education institutions.
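The structured, prompt-driven question-generation workflow the abstract describes might be sketched as follows. This is a minimal illustrative sketch, not the authors' actual template: the function name, prompt fields, and parameter choices are assumptions introduced here for clarity.

```python
# Hypothetical sketch of a structured prompt template for AI-assisted
# quiz-question generation, in the spirit of the interactive prompting
# patterns described in the paper. All names and fields are illustrative.

def build_quiz_prompt(topic: str, num_questions: int, difficulty: str) -> str:
    """Assemble a structured prompt asking a chatbot (e.g. ChatGPT)
    to generate multiple-choice questions for a course topic."""
    return (
        "You are an assessment designer for a university course.\n"
        f"Generate {num_questions} multiple-choice questions on '{topic}' "
        f"at {difficulty} difficulty.\n"
        "For each question, provide:\n"
        "- the question stem\n"
        "- four options labeled A-D\n"
        "- the correct answer\n"
        "- a one-sentence explanation\n"
    )

if __name__ == "__main__":
    prompt = build_quiz_prompt("central bank monetary policy", 5, "intermediate")
    print(prompt)
```

In an interactive workflow, the lecturer would iterate on such a prompt (adjusting topic, difficulty, and output format) and then route the generated items into a blind-test review by stakeholders before adding them to the question bank.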