ReTrace: Interactive Visualizations for Reasoning Traces of Large Reasoning Models

๐Ÿ“… 2025-11-14
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large reasoning models generate verbose, intricate chains of stepwise reasoning, imposing substantial cognitive load on users. To address this, we propose ReTraceโ€”a novel system that integrates a validated reasoning taxonomy with interactive visualization techniques for the first time. ReTrace constructs structured representations of reasoning data and introduces two new interactive visualizations: (1) a path graph, mapping high-level reasoning trajectories, and (2) a step-provenance view, enabling fine-grained tracing of individual reasoning steps. These support user comprehension, learning, and error diagnosis of model โ€œthought processes.โ€ A rigorously controlled user study demonstrates that both visualizations significantly improve comprehension accuracy (+32.7%) and reduce subjective cognitive load (โˆ’41.5%) compared to raw textual reasoning traces. This work establishes a scalable, structured visualization paradigm for enhancing AI explainability.

Technology Category

Application Category

๐Ÿ“ Abstract
Recent advances in Large Language Models have led to Large Reasoning Models, which produce step-by-step reasoning traces. These traces offer insight into how models think and their goals, improving explainability and helping users follow the logic, learn the process, and even debug errors. These traces, however, are often verbose and complex, making them cognitively demanding to comprehend. We address this challenge with ReTrace, an interactive system that structures and visualizes textual reasoning traces to support understanding. We use a validated reasoning taxonomy to produce structured reasoning data and investigate two types of interactive visualizations thereof. In a controlled user study, both visualizations enabled users to comprehend the model's reasoning more accurately and with less perceived effort than a raw text baseline. The results of this study could have design implications for making long and complex machine-generated reasoning processes more usable and transparent, an important step in AI explainability.
Problem

Research questions and friction points this paper is trying to address.

Visualizing complex reasoning traces from large models
Reducing cognitive load of understanding verbose reasoning steps
Improving interpretability of AI reasoning processes through interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive system structures textual reasoning traces
Uses validated taxonomy to produce structured reasoning data
Investigates interactive visualizations for model reasoning comprehension
๐Ÿ”Ž Similar Papers
No similar papers found.