🤖 AI Summary
This study investigates how transparency in source attribution within conversational AI interfaces influences users' information acquisition, trust formation, and critical evaluation. Through a between-subjects experiment, the authors compare four interface designs (collapsible citations, hover cards, footer references, and aligned sidebars) across varying citation densities, integrating fine-grained behavioral analysis with automated assessment of critical thinking. Findings reveal that hover cards facilitate immediate, on-demand verification, whereas aligned sidebars significantly enhance users' information integration and critical reasoning under high citation density. The work uncovers a fundamental trade-off between interactional fluency and reflective verification, offering empirical evidence to guide the design of responsible conversational AI systems that support both usability and epistemic vigilance.
📝 Abstract
Conversational AI systems increasingly function as primary interfaces for information seeking, yet how they present sources to support information evaluation remains underexplored. This paper investigates how source transparency design shapes interactive information seeking, trust, and critical engagement. We conducted a controlled between-subjects experiment (N=372) comparing four source presentation interfaces (Collapsible, Hover Card, Footer, and Aligned Sidebar) that vary in visibility and accessibility. Using fine-grained behavioral analysis and automated critical thinking assessment, we found that interface design fundamentally alters exploration strategies and evidence integration. While the Hover Card interface facilitated seamless, on-demand verification during the task, the Aligned Sidebar uniquely mitigated the negative effects of information overload: as citation density increased, Sidebar users demonstrated significantly higher critical thinking and synthesis scores than users in the other conditions. Our results highlight a trade-off between designs that support workflow fluency and those that enforce reflective verification, offering practical implications for designing adaptive and responsible conversational AI that fosters critical engagement with AI-generated content.