Disagreement as Data: Reasoning Trace Analytics in Multi-Agent Systems

📅 2026-01-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of effectively leveraging reasoning traces generated by large language models in multi-agent systems for qualitative coding. Treating multi-agent reasoning trajectories as a novel form of process data, it proposes a new paradigm that reconceptualizes semantic divergence as a valuable analytical signal. By quantifying inter-agent reasoning consistency and disagreement through cosine similarity, and integrating these quantitative metrics with human review, the approach establishes a human-AI collaborative framework for coding refinement. In experiments involving nearly 10,000 tutoring dialogues, semantic reasoning similarity significantly distinguished between consensus and disagreement, showed strong correlation with human inter-coder reliability, revealed sub-functional dimensions of codes, and facilitated iterative codebook optimization—thereby enhancing both methodological rigor and interpretive depth in educational research.

📝 Abstract
Learning analytics researchers often analyze qualitative student data such as coded annotations or interview transcripts to understand learning processes. With the rise of generative AI, fully automated and human-AI workflows have emerged as promising methods for analysis. However, methodological standards to guide such workflows remain limited. In this study, we propose that reasoning traces generated by large language model (LLM) agents, especially within multi-agent systems, constitute a novel and rich form of process data to enhance interpretive practices in qualitative coding. We apply cosine similarity to LLM reasoning traces to systematically detect, quantify, and interpret disagreements among agents, reframing disagreement as a meaningful analytic signal. Analyzing nearly 10,000 instances of agent pairs coding human tutoring dialogue segments, we show that LLM agents' semantic reasoning similarity robustly differentiates consensus from disagreement and correlates with human coding reliability. Qualitative analysis guided by this metric reveals nuanced instructional sub-functions within codes and opportunities for conceptual codebook refinement. By integrating quantitative similarity metrics with qualitative review, our method has the potential to improve and accelerate the establishment of inter-rater reliability during coding by surfacing interpretive ambiguity, especially when LLMs collaborate with humans. We discuss how reasoning-trace disagreements represent a valuable new class of analytic signals advancing methodological rigor and interpretive depth in educational research.
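The core metric described above is simple to reproduce: embed each agent's reasoning trace, then compare traces pairwise with cosine similarity and flag low-similarity segments for human review. The sketch below assumes pre-computed trace embeddings (e.g., from a sentence encoder); the example vectors, function names, and the 0.85 review threshold are illustrative assumptions, not values from the paper.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def mean_pairwise_similarity(trace_embeddings):
    """Mean cosine similarity over all agent pairs for one dialogue segment."""
    n = len(trace_embeddings)
    sims = [
        cosine_similarity(trace_embeddings[i], trace_embeddings[j])
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return sum(sims) / len(sims)


def needs_human_review(trace_embeddings, threshold=0.85):
    """Surface segments whose reasoning traces diverge semantically.

    The threshold is a hypothetical tuning parameter, not taken from the paper.
    """
    return mean_pairwise_similarity(trace_embeddings) < threshold


# Hypothetical trace embeddings for two agents coding the same segment
agent_a = [0.1, 0.8, 0.3]
agent_b = [0.2, 0.7, 0.4]
sim = cosine_similarity(agent_a, agent_b)
```

In a real pipeline, `trace_embeddings` would come from encoding each agent's free-text rationale for a coding decision; segments returned by `needs_human_review` are the candidates for the human-AI codebook-refinement loop the abstract describes.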
Problem

Research questions and friction points this paper is trying to address.

disagreement
reasoning trace
multi-agent systems
qualitative coding
learning analytics
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning traces
multi-agent systems
cosine similarity
disagreement as data
qualitative coding
Elham Tajik
University at Albany
Conrad Borchers
Carnegie Mellon University
Educational Data Mining · Learning Analytics · Intelligent Tutoring Systems · Self-Regulated Learning
Bahar Shahrokhian
Arizona State University
Sebastian Simon
Le Mans University
Ali Keramati
University of California, Irvine
Sonika Pal
Indian Institute of Technology Bombay
Sreecharan Sankaranarayanan
Amazon, Carnegie Mellon University
Artificial Intelligence in Education · LLMs · Multi-Agent Systems · Conversational Agents