Exploring the Role of Tracing in AI-Supported Planning for Algorithmic Reasoning

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic investigation into how integrating execution traces in AI-assisted programming influences learners’ algorithmic reasoning. Through a controlled experiment, it uniquely combines trace-based planning—common in traditional pen-and-paper settings—with large language model (LLM)-driven AI feedback, comparing explicit execution traces against purely natural language planning in terms of cognitive patterns and interaction behaviors. Findings indicate that exposure to execution traces shifts learners from line-by-line description toward goal-oriented reasoning about program behavior, leading to more consistent partially correct solutions. Although this approach did not significantly improve final coding performance or the quality of LLM feedback, it reveals the distinctive role of multimodal coordination—among natural language, execution traces, and code—in supporting algorithmic reasoning, offering novel insights for the design of AI-powered programming tools.

📝 Abstract
AI-powered planning tools show promise in supporting programming learners by enabling early, formative feedback on their thinking processes prior to coding. To date, however, most AI-supported planning tools rely on students' natural-language explanations, using LLMs to interpret learners' descriptions of their algorithmic intent. Prior to the emergence of LLM-based systems, CS education research extensively studied trace-based planning in pen-and-paper settings, demonstrating that reasoning through stepwise execution with explicit state transitions helps learners build and refine mental models of program behavior. Despite its potential, little is known about how tracing interacts with AI-mediated feedback and whether integrating tracing into AI-supported planning tools leads to different learning processes or interaction dynamics compared to natural-language-based planning alone. We study how requiring learners to produce explicit execution traces with an AI-supported planning tool affects their algorithmic reasoning. In a between-subjects study with 20 students, tracing shifted learners away from code-like, line-by-line descriptions toward more goal-driven reasoning about program behavior. Moreover, it led to more consistent partially correct solutions, although final coding performance remained comparable across conditions. Notably, tracing did not significantly affect the quality or reliability of LLM-generated feedback. These findings reveal tradeoffs in combining tracing with AI-supported planning and inform design guidelines for integrating natural language, tracing, and coding to support iterative reasoning throughout the programming process.
Problem

Research questions and friction points this paper is trying to address.

tracing
AI-supported planning
algorithmic reasoning
programming education
execution traces
Innovation

Methods, ideas, or system contributions that make the work stand out.

tracing
AI-supported planning
algorithmic reasoning
execution trace
LLM feedback