🤖 AI Summary
Early-career researchers often struggle to develop critical reading skills because they have limited access to diverse peer feedback. This study introduces an "in-situ agent perspective exchange" mechanism: a real-time interactive interface, based on a large language model, that supports viewpoint generation, comparative analysis, and reflective synthesis under both single- and multi-agent conditions. Its primary contribution is the first systematic investigation of AI agents as dialectical collaborators in academic reading, validated through in-situ annotation design, multi-agent dialogue simulation, and qualitative coding analysis. Experimental results show that agent-supported reading significantly improves critical thinking scores relative to the no-agent baseline (p < 0.01). Moreover, the multi-agent condition fosters greater user engagement in comparing, questioning, and integrating heterogeneous perspectives, yielding a 37% increase in reflection depth.
📝 Abstract
Critical reading is a primary way through which researchers develop their critical thinking skills. While exchanging thoughts and opinions with peers can strengthen critical reading, junior researchers often lack access to peers who can offer diverse perspectives. To address this gap, we designed an in-situ thought exchange interface, informed by peer feedback from a formative study (N=8), to support junior researchers' critical paper reading. We evaluated the effects of thought exchanges under three conditions (no-agent, single-agent, and multi-agent) with 46 junior researchers over two weeks. Our results showed that incorporating agent-mediated thought exchanges during paper reading significantly improved participants' critical thinking scores compared to the no-agent condition. In the single-agent condition, participants more frequently made reflective annotations on the paper content; in the multi-agent condition, participants engaged more actively with agents' responses. Our qualitative analysis further revealed that participants compared and analyzed multiple perspectives in the multi-agent condition. This work contributes to understanding in-situ AI-based support for critical paper reading through thought exchanges and offers design implications for future research.