🤖 AI Summary
This study addresses the challenge of effectively leveraging reasoning traces generated by large language model agents in multi-agent systems for qualitative coding. Treating multi-agent reasoning trajectories as a novel form of process data, it proposes a paradigm that reframes semantic divergence between agents as a valuable analytical signal. By quantifying inter-agent reasoning consistency and disagreement with cosine similarity, and integrating these quantitative metrics with human review, the approach establishes a human-AI collaborative framework for coding refinement. In experiments on nearly 10,000 agent-pair codings of tutoring dialogue segments, semantic reasoning similarity robustly distinguished consensus from disagreement, correlated strongly with human inter-coder reliability, revealed sub-functional dimensions within codes, and supported iterative codebook refinement, thereby enhancing both methodological rigor and interpretive depth in educational research.
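For reference, the cosine similarity invoked above is the standard vector measure; in this reading, $\mathbf{u}$ and $\mathbf{v}$ would be the embedding vectors of two agents' reasoning traces (the specific embedding model is not stated in this summary):

$$
\operatorname{sim}(\mathbf{u}, \mathbf{v}) \;=\; \frac{\mathbf{u}\cdot\mathbf{v}}{\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert} \;\in\; [-1, 1]
$$

Higher values indicate that the two agents' stated reasoning is semantically closer, which the study treats as a proxy for coding consensus.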
📝 Abstract
Learning analytics researchers often analyze qualitative student data such as coded annotations or interview transcripts to understand learning processes. With the rise of generative AI, fully automated and human-AI workflows have emerged as promising methods for analysis. However, methodological standards to guide such workflows remain limited. In this study, we propose that reasoning traces generated by large language model (LLM) agents, especially within multi-agent systems, constitute a novel and rich form of process data that can enhance interpretive practices in qualitative coding. We apply cosine similarity to LLM reasoning traces to systematically detect, quantify, and interpret disagreements among agents, reframing disagreement as a meaningful analytic signal. Analyzing nearly 10,000 instances of agent pairs coding human tutoring dialogue segments, we show that LLM agents' semantic reasoning similarity robustly differentiates consensus from disagreement and correlates with human coding reliability. Qualitative analysis guided by this metric reveals nuanced instructional sub-functions within codes and opportunities for conceptual codebook refinement. By integrating quantitative similarity metrics with qualitative review, our method has the potential to improve and accelerate the establishment of inter-rater reliability during coding by surfacing interpretive ambiguity, especially when LLMs collaborate with humans. We discuss how reasoning-trace disagreements represent a valuable new class of analytic signals for advancing methodological rigor and interpretive depth in educational research.
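As a rough illustration of the kind of workflow the abstract describes, the sketch below embeds two agents' reasoning traces and scores their semantic similarity with cosine similarity, flagging low-similarity pairs for human review. The embedding model (`all-MiniLM-L6-v2` via sentence-transformers), the example traces, and the review threshold are assumptions for illustration only; the paper's actual models, prompts, and thresholds are not reproduced here.

```python
# Minimal sketch (not the authors' implementation): score semantic similarity
# between two agents' reasoning traces and flag likely disagreement for review.
# The encoder choice, example traces, and 0.8 threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical choice of encoder

# Reasoning traces produced by two LLM agents coding the same tutoring dialogue segment.
trace_agent_a = ("The tutor restates the student's answer to confirm it is correct, "
                 "so I code this turn as 'feedback'.")
trace_agent_b = ("The tutor is prompting the student to elaborate rather than evaluating "
                 "the answer, so I code this turn as 'elicitation'.")

# Embed both traces and compute their cosine similarity.
embeddings = model.encode([trace_agent_a, trace_agent_b])
similarity = cosine_similarity([embeddings[0]], [embeddings[1]])[0, 0]

# Low similarity surfaces interpretive ambiguity for human review;
# high similarity suggests the agents reasoned toward a shared interpretation.
if similarity < 0.8:  # illustrative threshold, not the study's
    print(f"Flag for human review: reasoning similarity = {similarity:.2f}")
else:
    print(f"Likely consensus: reasoning similarity = {similarity:.2f}")
```

In a full pipeline, a score like this would be computed for every agent pair and coded segment, and the lowest-similarity cases would be routed to human coders for codebook refinement.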