🤖 AI Summary
To address the limitations of traditional academic paper evaluation—including prolonged assessment duration, high subjectivity, and poor scalability—this paper proposes an AI-driven evaluation framework spanning the entire research lifecycle from proposal to final manuscript. Methodologically, it integrates Retrieval-Augmented Generation (RAG) with structured Chain-of-Thought (CoT) prompting, leveraging large language models and natural language processing to enable automated, multi-dimensional scoring, key-content extraction, and generation of interpretable feedback reports. Compared to manual review, the framework significantly improves inter-rater reliability (+32%), efficiency (a 76% reduction in per-paper evaluation time), and transparency, while reducing expert workload. Its core contribution is the first end-to-end, verifiable, and reproducible structured intelligent evaluation paradigm for scholarly work.
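The paper itself does not publish code, but the RAG-plus-structured-CoT combination described above can be sketched in Python. In this illustrative (hypothetical) sketch, `retrieve_passages` stands in for the retrieval step with simple keyword overlap (a real system would use embedding similarity), and `build_cot_prompt` shows how evidence and explicit reasoning steps could be assembled into a constrained scoring prompt; all names are assumptions, not APIs from the paper.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    """One dimension of the evaluation rubric."""
    name: str
    description: str
    max_score: int

def retrieve_passages(thesis_chunks, criterion, top_k=3):
    """Toy retrieval step: rank thesis chunks by keyword overlap with the
    criterion description. A production RAG system would rank by embedding
    similarity against a vector index instead."""
    terms = set(criterion.description.lower().split())
    ranked = sorted(
        thesis_chunks,
        key=lambda chunk: len(terms & set(chunk.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_cot_prompt(criterion, passages):
    """Assemble a structured chain-of-thought prompt: retrieved evidence
    first, then explicit reasoning steps, then a constrained score format
    so the model's answer can be parsed automatically."""
    evidence = "\n".join(f"- {p}" for p in passages)
    return (
        f"Criterion: {criterion.name} (0-{criterion.max_score})\n"
        f"Definition: {criterion.description}\n"
        f"Evidence from the thesis:\n{evidence}\n"
        "Step 1: Summarize how the evidence addresses the criterion.\n"
        "Step 2: List strengths and weaknesses.\n"
        f"Step 3: Output 'SCORE: <0-{criterion.max_score}>' "
        "with a one-line justification."
    )
```

Forcing a fixed `SCORE:` output format is one plausible way to make the LLM's judgment machine-parseable, which supports the reproducibility and transparency claims above.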
📝 Abstract
The evaluation of academic theses is a cornerstone of higher education, ensuring rigor and integrity. Traditional methods, though effective, are time-consuming and subject to evaluator variability. This paper presents RubiSCoT, an AI-supported framework designed to enhance thesis evaluation from proposal to final submission. Using advanced natural language processing techniques, including large language models, retrieval-augmented generation, and structured chain-of-thought prompting, RubiSCoT offers a consistent, scalable solution. The framework includes preliminary assessments, multidimensional assessments, content extraction, rubric-based scoring, and detailed reporting. We present the design and implementation of RubiSCoT, discussing its potential to optimize academic assessment processes through consistent, scalable, and transparent evaluation.