AI Summary
This study addresses the lack of structured pedagogical guidance in large language models for math tutoring by systematically integrating Pólya's four-step problem-solving framework into the instruction fine-tuning of Llama-3.1-8B. The authors construct a metacognitively guided dialogue system through synthetically generated GSM8K data aligned with this framework. Their approach encompasses pedagogically aligned data generation, multi-variant instruction fine-tuning (general, mathematical, and pedagogical), and a hybrid evaluation protocol. Experimental results demonstrate that the resulting model exhibits a more balanced distribution across problem-solving stages and reduces premature answering. Expert evaluations further confirm significant improvements in instructional coherence and metacognitive prompting. This work achieves a deep integration of educational theory and AI reasoning, enhancing both educational alignment and reasoning transparency in AI-powered tutoring.
Abstract
This paper introduces Llama-Polya, an instruction-tuned large language model that integrates Pólya's four-step problem-solving framework into its dialogue structure to support mathematical reasoning. Mathematical problem-solving is central to students' success in mathematics education, yet many learners struggle to plan, justify, and verify their solutions. Although large language models (LLMs) show promise as intelligent tutors, they often lack structured pedagogical alignment grounded in established learning theories. To address this gap, we operationalize Pólya's problem-solving framework within an instruction-tuned LLM to promote metacognitive engagement, and we examine the effects of pedagogy-aligned fine-tuning compared to domain-only and general-purpose instruction tuning. Built on the Llama-3.1-8B architecture, Llama-Polya was fine-tuned on synthetic math problem-solving data derived from GSM8K and structured according to Pólya's four stages. We developed and evaluated multiple variants (general-purpose instruct, math-domain metamath, pedagogy-aligned polya-v2, and sequential metamath+polya-v2) using both quantitative accuracy metrics and qualitative pedagogical assessments. Results indicate that models tuned with Pólya's framework and domain-specific data produced more balanced reasoning-stage distributions and fewer premature answers. Expert evaluators also observed improved pedagogical coherence and metacognitive prompting, although limitations in personalization and mathematical rigor remained. These findings suggest that pedagogy-grounded instruction tuning can enhance educational alignment and reasoning transparency in LLM-based tutoring systems.
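The abstract describes restructuring GSM8K problems into training records that walk through Pólya's four stages (understand the problem, devise a plan, carry out the plan, look back). A minimal sketch of how one such synthetic record might be assembled is shown below; the field names, stage headers, and example problem are illustrative assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of building one Polya-staged instruction-tuning record.
# Field names ("instruction"/"response") and header wording are assumptions.

POLYA_STAGES = [
    "Understand the problem",
    "Devise a plan",
    "Carry out the plan",
    "Look back",
]

def build_polya_example(question: str, stage_texts: list[str]) -> dict:
    """Assemble a record whose response covers all four Polya stages
    instead of jumping straight to the final answer."""
    assert len(stage_texts) == len(POLYA_STAGES)
    response = "\n\n".join(
        f"[{stage}] {text}" for stage, text in zip(POLYA_STAGES, stage_texts)
    )
    return {"instruction": question, "response": response}

example = build_polya_example(
    "A baker sells 12 loaves a day. How many loaves does she sell in a week?",
    [
        "We need the total loaves over 7 days at 12 loaves per day.",
        "Multiply the daily count by the number of days.",
        "12 * 7 = 84 loaves.",
        "Check: 84 / 7 = 12 matches the daily rate, so the answer is consistent.",
    ],
)
print(example["response"])
```

Records of this shape would let the fine-tuned model learn a balanced distribution across reasoning stages, which the paper reports as reducing premature answering.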