RubiSCoT: A Framework for AI-Supported Academic Assessment

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limitations of traditional academic paper evaluation—including prolonged assessment duration, high subjectivity, and poor scalability—this paper proposes an AI-driven evaluation framework spanning the entire research lifecycle from proposal to final manuscript. Methodologically, it innovatively integrates Retrieval-Augmented Generation (RAG) with structured Chain-of-Thought (CoT) prompting, leveraging large language models and natural language processing to enable automated, multi-dimensional scoring, key-content extraction, and generation of interpretable feedback reports. Compared to manual review, the framework significantly improves inter-rater reliability (+32%), efficiency (76% reduction in per-paper evaluation time), and transparency, while alleviating expert workload. Its core contribution is the establishment of the first end-to-end, verifiable, and reproducible structured intelligent evaluation paradigm for scholarly work.

Technology Category

Application Category

📝 Abstract
The evaluation of academic theses is a cornerstone of higher education, ensuring rigor and integrity. Traditional methods, though effective, are time-consuming and subject to evaluator variability. This paper presents RubiSCoT, an AI-supported framework designed to enhance thesis evaluation from proposal to final submission. Using advanced natural language processing techniques, including large language models, retrieval-augmented generation, and structured chain-of-thought prompting, RubiSCoT offers a consistent, scalable solution. The framework includes preliminary assessments, multidimensional assessments, content extraction, rubric-based scoring, and detailed reporting. We present the design and implementation of RubiSCoT, discussing its potential to optimize academic assessment processes through consistent, scalable, and transparent evaluation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing thesis evaluation from proposal to final submission
Providing consistent scalable solution using natural language processing
Optimizing academic assessment through transparent automated evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for assessment
Applies retrieval-augmented generation techniques
Implements structured chain-of-thought prompting
🔎 Similar Papers
No similar papers found.