🤖 AI Summary
This work proposes a hybrid conversational agent that embeds the contextual responsiveness of large language models (LLMs) within a rule-based framework grounded in self-regulated learning theory. It addresses a limitation of existing dialogue systems, which either rely on rigid rules that fail to adapt to learners’ dynamic engagement or leverage LLMs without sufficient educational grounding, constraining reflective guidance. Deployed in a culturally responsive robotics summer camp, the agent dynamically fosters deep reflection among learners. Reflection quality is evaluated through dialogic topic analysis, which shows that the approach enhances conversational flexibility while preserving theoretical rigor, eliciting rich learner reflections on goals and activities. Challenges persist, however: repetitive and misaligned prompts occasionally hindered participant engagement.
📝 Abstract
Dialogue systems have long supported learner reflection, with theoretically grounded, rule-based designs offering structured scaffolding but often struggling to respond to shifts in engagement. Large Language Models (LLMs), in contrast, can generate context-sensitive responses but are not informed by decades of research on how learning interactions should be structured, raising questions about their alignment with pedagogical theories. This paper presents a hybrid dialogue system that embeds LLM responsiveness within a theory-aligned, rule-based framework to support learner reflection in a culturally responsive robotics summer camp. The rule-based structure grounds dialogue in self-regulated learning theory, while the LLM decides when and how to prompt for deeper reflection in response to the evolving conversational context. We analyze themes across dialogues to explore how our hybrid system shaped learner reflections. Our findings indicate that LLM-embedded dialogues supported richer learner reflections on goals and activities, but repetitive and misaligned prompts at times reduced engagement.