Leveraging Peer, Self, and Teacher Assessments for Generative AI-Enhanced Feedback

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In higher education, oral presentation feedback for large-scale engineering courses faces persistent challenges in timeliness, depth, and scalability. To address this, we propose AICoFe—a novel human-AI collaborative feedback system that integrates instructor, peer, and self-assessment within a validated multi-source framework. AICoFe employs rubric-driven differential analysis, weighted input fusion, bias detection algorithms, and a context-aware generative AI model to produce transparent, pedagogically grounded, and scalable feedback. Through an empirical study (N=46), we first document systematic scoring discrepancies and complementary evaluative perspectives across the three assessment sources. Results demonstrate that our generative AI paradigm significantly enhances feedback specificity and instructional effectiveness. AICoFe thus establishes a new pathway for formative assessment—balancing scalability with pedagogical rigor—while ensuring transparency and trustworthiness in AI-augmented educational practice.

📝 Abstract
Providing timely and meaningful feedback remains a persistent challenge in higher education, especially in large courses where teachers must balance formative depth with scalability. Recent advances in Generative Artificial Intelligence (GenAI) offer new opportunities to support feedback processes while maintaining human oversight. This paper presents a study conducted within the AICoFe (AI-based Collaborative Feedback) system, which integrates teacher, peer, and self-assessments of engineering students' oral presentations. Using a validated rubric, 46 evaluation sets were analyzed to examine agreement, correlation, and bias across evaluators. The analyses revealed consistent overall alignment among sources but also systematic variations in scoring behavior, reflecting distinct evaluative perspectives. These findings informed the proposal of an enhanced GenAI model within the AICoFe system, designed to integrate human assessments through weighted input aggregation, bias detection, and context-aware feedback generation. The study contributes empirical evidence and design principles for developing GenAI-based feedback systems that combine data-driven efficiency with pedagogical validity and transparency.
Problem

Research questions and friction points this paper is trying to address.

Addresses feedback scalability in large higher education courses.
Integrates teacher, peer, and self-assessments for oral presentations.
Proposes a GenAI model to enhance feedback with bias detection.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates peer, self, and teacher assessments for feedback generation.
Uses weighted input aggregation and bias detection in the AI model.
Generates context-aware feedback with pedagogical validity and transparency.
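The weighted aggregation and bias-detection steps above might be sketched as follows. This is a minimal illustration, not the paper's implementation: the source weights, the leniency threshold, and all function names are assumptions introduced here, and the bias check is a simple mean signed deviation from the unweighted consensus.

```python
# Hypothetical sketch: fuse teacher, peer, and self rubric scores with
# per-source weights, then flag sources whose scores deviate systematically
# from the consensus. Weights and threshold are illustrative assumptions.

SOURCE_WEIGHTS = {"teacher": 0.5, "peer": 0.3, "self": 0.2}
BIAS_THRESHOLD = 0.5  # mean deviation (in rubric points) flagged as systematic

def aggregate_scores(scores: dict[str, list[float]]) -> list[float]:
    """Weighted per-criterion fusion of the three assessment sources."""
    n = len(next(iter(scores.values())))
    return [
        sum(SOURCE_WEIGHTS[src] * scores[src][i] for src in scores)
        for i in range(n)
    ]

def detect_bias(scores: dict[str, list[float]]) -> dict[str, float]:
    """Mean signed deviation of each source from the unweighted consensus.

    Positive values suggest leniency, negative values severity."""
    sources = list(scores)
    n = len(scores[sources[0]])
    consensus = [sum(scores[s][i] for s in sources) / len(sources) for i in range(n)]
    return {
        src: sum(scores[src][i] - consensus[i] for i in range(n)) / n
        for src in sources
    }

# Toy evaluation set: three rubric criteria scored by each source.
scores = {
    "teacher": [3.0, 4.0, 3.5],
    "peer":    [3.5, 4.0, 4.0],
    "self":    [4.5, 5.0, 4.5],  # noticeably more lenient
}
fused = aggregate_scores(scores)
bias = detect_bias(scores)
flagged = [src for src, dev in bias.items() if abs(dev) > BIAS_THRESHOLD]
```

In this toy run the self-assessment exceeds the consensus on every criterion and is flagged as lenient; a real system would estimate such tendencies across many evaluation sets before down-weighting or annotating a source's input.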