🤖 AI Summary
To address the time consumption, reading fatigue, and analytical challenges that STEM researchers face when processing peer review feedback, this paper proposes a contextualized interactive support paradigm and develops a context-aware, visualization-guided system for review feedback comprehension. Methodologically, it integrates user interviews, storyboard-based narrative design, and human-computer interaction principles, overcoming limitations of existing approaches, including insufficient domain-specific theoretical guidance and the lack of domain-aware NLP tools. A controlled experiment (N=31) demonstrates significant improvements over conventional approaches in comprehension efficiency, integration accuracy, and cognitive load reduction. A field deployment (N=6) further validates its effectiveness and high usability in real-world academic workflows. The core contribution is the first application of contextualized interaction to scholarly review feedback processing, establishing a scalable, interpretable, and context-rich human-AI collaboration paradigm.
📝 Abstract
Effectively assimilating and integrating reviewer feedback is crucial for researchers seeking to refine their papers and navigate potential rebuttal phases at academic venues. However, traditional review digestion processes present challenges such as time consumption, reading fatigue, and the need for comprehensive analytical skills. Prior research on review analysis often provides theoretical guidance but limited targeted support. Additionally, general text comprehension tools overlook the intricacies of comprehensively understanding reviews and lack contextual assistance. To bridge this gap, we formulated research questions exploring authors' concerns and methods for enhancing comprehension during the review digestion phase. Through interviews and the creation of storyboards, we developed ReviseMate, an interactive system designed to address the identified challenges. A controlled user study (N=31) demonstrated the superiority of ReviseMate over baseline methods, with positive feedback on user interaction. A subsequent field deployment (N=6) further validated the effectiveness of ReviseMate in real-world review digestion scenarios. These findings underscore the potential of interactive tools to significantly enhance the assimilation and integration of reviewer feedback during the manuscript revision process.