Evaluating AI-Generated Essays with GRE Analytical Writing Assessment

📅 2024-10-22
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the dual challenges posed by AI-generated text to automated scoring and academic integrity. It systematically evaluates ten state-of-the-art large language models (LLMs) on the Graduate Record Examination (GRE) Analytical Writing Assessment (AWA), the first high-stakes standardized writing test adopted as an LLM benchmark. Evaluation employs dual validation: e-rater automated scoring and multi-round, double-blind human assessment. Additionally, an AI-text detector is trained across multiple LLMs and rigorously tested for both in-distribution (same-model) and out-of-distribution (cross-model) generalization. Results show Gemini and GPT-4o achieve the highest scores (4.78 and 4.67, respectively), placing them in the upper-mid GRE AWA band; the detector attains >95% accuracy on same-model texts but exhibits substantial performance degradation under cross-model generalization—revealing a critical limitation of current detection methods. The core contributions are: (1) establishing GRE AWA as a novel, rigorous benchmark for LLM writing proficiency, and (2) uncovering generational disparities in deep analytical reasoning, logical coherence, and expressive clarity, alongside empirically defined detectability boundaries.

📝 Abstract
The recent revolutionary advance in generative AI enables the generation of realistic and coherent texts by large language models (LLMs). Despite many existing evaluation metrics on the quality of the generated texts, there is still a lack of rigorous assessment of how well LLMs perform in complex and demanding writing assessments. This study examines essays generated by ten leading LLMs for the analytical writing assessment of the Graduate Record Exam (GRE). We assessed these essays using both human raters and the e-rater automated scoring engine as used in the GRE scoring pipeline. Notably, the top-performing Gemini and GPT-4o received average scores of 4.78 and 4.67, respectively, falling between "generally thoughtful, well-developed analysis of the issue and conveys meaning clearly" and "presents a competent analysis of the issue and conveys meaning with acceptable clarity" according to the GRE scoring guideline. We also evaluated the detection accuracy of these essays, with detectors trained on essays generated by the same and different LLMs.
Problem

Research questions and friction points this paper is trying to address.

Analyzing AI-generated essays' impact on automated scoring systems
Evaluating AI writing's implications for academic integrity concerns
Assessing cross-model detection feasibility for AI-generated content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking AI-generated essays using large-scale empirical data
Developing new features to capture deep analytical reasoning in scoring
Training detectors on one model to identify other AI texts
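The cross-model setup above (train a detector on essays from one LLM, test it on essays from another) can be sketched as follows. This is a hypothetical toy illustration, not the paper's detector: the classifier here is a simple nearest-centroid model over character trigram profiles, and all function names and texts are invented stand-ins for the real GRE AWA essays and trained detector.

```python
# Toy sketch of in-distribution vs. cross-model detector evaluation.
# Assumption: a nearest-centroid classifier over character trigrams stands in
# for the paper's actual AI-text detector.
from collections import Counter


def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def cosine(a, b):
    """Cosine similarity between two sparse Counter profiles."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def train_centroids(human_texts, ai_texts):
    """'Train' the detector: build one average n-gram profile per class."""
    def centroid(texts):
        total = Counter()
        for t in texts:
            total += char_ngrams(t)
        return total
    return centroid(human_texts), centroid(ai_texts)


def classify(text, human_centroid, ai_centroid):
    """Label a text by whichever class centroid it is closer to."""
    profile = char_ngrams(text)
    if cosine(profile, ai_centroid) > cosine(profile, human_centroid):
        return "ai"
    return "human"


def evaluate(texts, labels, human_centroid, ai_centroid):
    """Accuracy of the detector on a labeled test set. Running this on
    essays from the training LLM gives in-distribution accuracy; running
    it on essays from a different LLM probes cross-model generalization."""
    correct = sum(classify(t, human_centroid, ai_centroid) == y
                  for t, y in zip(texts, labels))
    return correct / len(texts)
```

Under this protocol, the paper's finding corresponds to `evaluate` scoring above 95% when the test essays come from the same LLM the detector was trained on, and markedly lower when they come from a different one.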