AI Summary
Large reasoning models generate verbose, intricate chains of stepwise reasoning, imposing substantial cognitive load on users. To address this, we propose ReTrace, a novel system that integrates a validated reasoning taxonomy with interactive visualization techniques for the first time. ReTrace constructs structured representations of reasoning data and introduces two new interactive visualizations: (1) a path graph, mapping high-level reasoning trajectories, and (2) a step-provenance view, enabling fine-grained tracing of individual reasoning steps. These support user comprehension, learning, and error diagnosis of model "thought processes." A controlled user study demonstrates that both visualizations significantly improve comprehension accuracy (+32.7%) and reduce subjective cognitive load (−41.5%) compared to raw textual reasoning traces. This work establishes a scalable, structured visualization paradigm for enhancing AI explainability.
Abstract
Recent advances in Large Language Models have led to Large Reasoning Models, which produce step-by-step reasoning traces. These traces offer insight into a model's goals and how it reaches its answers, improving explainability and helping users follow the logic, learn the process, and even debug errors. However, the traces are often verbose and complex, making them cognitively demanding to comprehend. We address this challenge with ReTrace, an interactive system that structures and visualizes textual reasoning traces to support understanding. We use a validated reasoning taxonomy to produce structured reasoning data and investigate two types of interactive visualizations of it. In a controlled user study, both visualizations enabled users to comprehend the model's reasoning more accurately and with less perceived effort than a raw text baseline. The results suggest design implications for making long and complex machine-generated reasoning processes more usable and transparent, an important step in AI explainability.
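To make the idea concrete, here is a minimal sketch of what taxonomy-structured reasoning data and a category-level path graph might look like. This is purely illustrative: the abstract does not publish ReTrace's schema, and the `Step` fields, category names, and `path_graph` helper below are hypothetical stand-ins for the structured representation and high-level trajectory view it describes.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    idx: int        # position in the raw trace
    category: str   # taxonomy label (invented examples: "plan", "derive", "verify")
    text: str       # raw reasoning text for this step
    # indices of earlier steps this one builds on (step-provenance links)
    provenance: list = field(default_factory=list)

def path_graph(steps):
    """Collapse a step sequence into weighted category-to-category edges,
    a minimal stand-in for a high-level reasoning path graph."""
    edges = {}
    for a, b in zip(steps, steps[1:]):
        key = (a.category, b.category)
        edges[key] = edges.get(key, 0) + 1
    return edges

trace = [
    Step(0, "plan", "Break the problem into cases."),
    Step(1, "derive", "Case 1: x > 0 implies ...", provenance=[0]),
    Step(2, "verify", "Check the derivation against the constraint.", provenance=[1]),
    Step(3, "derive", "Case 2: x <= 0 implies ...", provenance=[0]),
]

print(path_graph(trace))
# prints {('plan', 'derive'): 1, ('derive', 'verify'): 1, ('verify', 'derive'): 1}
```

The provenance lists support tracing an individual step back to the earlier steps it depends on, while the collapsed edge map is the kind of aggregate a trajectory visualization could render.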