🤖 AI Summary
Problem: Higher education students often lack critical engagement with generative AI tools, particularly in academic writing tasks. Method: This study designs and empirically validates an integrated literature review pedagogy, "AI feedback–peer review–reflective iteration," featuring a human–AI co-feedback mechanism. It combines a customized AI review system, structured peer assessment, longitudinal writing process analytics, and reflective journal analysis to trace the dynamic evolution of student–AI interaction. Contribution/Results: Qualitative findings show notable improvements in graduate students' academic writing proficiency, disciplinary conceptual understanding, and AI literacy. Learners strategically leveraged AI feedback while developing metacognitive awareness of its limitations. The study formalizes a transferable, scalable human–AI collaborative teaching model, offering both theoretical grounding and an actionable implementation framework for deep, pedagogically sound integration of generative AI in higher education.
📝 Abstract
The growing integration of generative AI in higher education is transforming how students write, learn, and engage with knowledge. As AI tools become commonplace in classrooms, there is an urgent need for pedagogical approaches that help students use them critically and reflectively. This study proposes a pedagogical design that integrates AI and peer feedback in a graduate-level academic writing activity. Over eight weeks, students developed literature review projects through multiple writing and revision stages, receiving feedback from both a custom-built AI reviewer and human peers. We examine two questions: (1) How did students interact with and incorporate AI and peer feedback during the writing process? and (2) How did they reflect on and develop relationships with both human and AI reviewers? Data sources include student writing artifacts, AI and peer feedback, AI chat logs, and student reflections. Findings show that students engaged differently with each feedback source: they relied on AI for rubric alignment and surface-level edits, and on peers for conceptual development and disciplinary relevance. Reflections revealed evolving relationships with AI, characterized by increasing confidence, strategic use, and critical awareness of its limitations. The pedagogical design supported writing development, AI literacy, and disciplinary understanding. This study offers a scalable pedagogical model for integrating AI into writing instruction and contributes insights for system-level approaches to fostering meaningful human–AI collaboration in higher education.