🤖 AI Summary
This work proposes an interpretable tutoring framework inspired by dual-process theory. It addresses a limitation of current large language model–driven intelligent tutors, which often conflate cognitive diagnosis, affective awareness, and instructional decision-making within intuitive, single-step generation that lacks deliberate adaptability. The framework introduces, for the first time in AI tutoring, a structured reasoning workspace that explicitly decouples inference about the learner's state from the selection of pedagogical actions. It integrates causal evidence parsing, fuzzy cognitive diagnosis, counterfactual stability analysis, and prospective affective reasoning, and it enhances transparency through visualizable decision pathways. Human-in-the-loop evaluations demonstrate significant improvements in personalization, emotional sensitivity, and instructional clarity, and ablation studies confirm that each component is necessary, collectively enabling reliable and explainable intelligent tutoring.
📝 Abstract
While Large Language Models (LLMs) have demonstrated remarkable fluency in educational dialogues, most generative tutors primarily operate through intuitive, single-pass generation. This reliance on fast thinking precludes a dedicated reasoning workspace, forcing multiple diagnostic and strategic signals to be processed in a conflated manner. As a result, learner cognitive diagnosis, affective perception, and pedagogical decision-making become tightly entangled, which limits the tutoring system's capacity for deliberate instructional adaptation. We propose SLOW, a theory-informed tutoring framework that supports deliberate learner-state reasoning within a transparent decision workspace. Inspired by dual-process accounts of human tutoring, SLOW explicitly separates learner-state inference from instructional action selection. The framework integrates causal evidence parsing from learner language, fuzzy cognitive diagnosis with counterfactual stability analysis, and prospective affective reasoning to anticipate how instructional choices may influence learners' emotional trajectories. These signals are jointly considered to guide pedagogically and affectively aligned tutoring strategies. Evaluation using hybrid human-AI judgments demonstrates significant improvements in personalization, emotional sensitivity, and clarity. Ablation studies further confirm the necessity of each module, showcasing how SLOW enables interpretable and reliable intelligent tutoring through a visualized decision-making process. This work advances the interpretability and educational validity of LLM-based adaptive instruction.
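The core architectural claim, separating learner-state inference from instructional action selection, can be sketched as a minimal two-stage pipeline. This is an illustrative sketch only: the class and function names (`LearnerState`, `diagnose`, `select_action`), the keyword-based heuristics, and the action labels are all hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    mastery: dict   # fuzzy mastery degrees in [0, 1] per concept (illustrative)
    affect: str     # coarse affective label, e.g. "frustrated" (illustrative)

def diagnose(utterance: str) -> LearnerState:
    """Stage 1 (hypothetical): infer learner state from language evidence only.
    A real system would use causal evidence parsing and fuzzy diagnosis;
    here a keyword heuristic stands in for that machinery."""
    mastery = {"fractions": 0.3 if "confused" in utterance else 0.8}
    affect = "frustrated" if "confused" in utterance else "neutral"
    return LearnerState(mastery=mastery, affect=affect)

def select_action(state: LearnerState) -> str:
    """Stage 2 (hypothetical): choose a pedagogical move from the inferred
    state alone, never from the raw utterance, so the decision pathway
    between the two stages stays inspectable."""
    if state.affect == "frustrated":
        return "encourage_then_scaffold"
    if min(state.mastery.values()) < 0.5:
        return "worked_example"
    return "challenge_problem"

# The two stages compose into a transparent decision pathway:
state = diagnose("I'm still confused about fractions")
action = select_action(state)
print(state.affect, action)  # frustrated encourage_then_scaffold
```

The point of the sketch is the interface, not the heuristics: because the tutor's action depends only on the explicit `LearnerState`, the intermediate diagnosis can be visualized, audited, or perturbed (e.g. for counterfactual stability checks) independently of the final instructional choice.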