🤖 AI Summary
Research on scientific writing revision behavior, and evaluation of large language models (LLMs) as scientific writing assistants, has been limited by the lack of publicly available early-stage revision data. This work addresses that gap by systematically extracting revision traces left by authors in arXiv LaTeX source files. By aligning commented-out draft text with nearby final published paragraphs, the study constructs paragraph-level revision pairs and applies LLM-based filtering, followed by human validation, to curate a large-scale, high-quality revision dataset. From 1.28 million candidate pairs, the authors obtain 578,000 authentic revision pairs and additionally provide a human-annotated benchmark for revision detection, offering a valuable resource for research on writing dynamics and LLM-assisted editing.
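The alignment step described above could be approximated with simple string similarity. The sketch below is a hypothetical illustration, not the paper's actual method: the function name, the use of `difflib`, and the similarity threshold are all assumptions, and the paper's LLM-based filtering and human validation stages are not modeled here.

```python
from difflib import SequenceMatcher


def best_alignment(draft: str, final_paragraphs: list[str], threshold: float = 0.5):
    """Pair a commented-out draft segment with its most similar final paragraph.

    Hypothetical sketch: the threshold and similarity measure are
    assumptions, not details from the paper.
    """
    best, best_score = None, 0.0
    for para in final_paragraphs:
        # Ratio in [0, 1]; higher means the two texts share more content.
        score = SequenceMatcher(None, draft.lower(), para.lower()).ratio()
        if score > best_score:
            best, best_score = para, score
    # Only accept pairs similar enough to plausibly be a revision, not noise.
    return (draft, best, best_score) if best_score >= threshold else None
```

In a real pipeline, pairs passing such a coarse filter would still need the LLM-based filtering and human validation the authors describe to separate genuine revisions from incidental overlap.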
📝 Abstract
Scientific writing is an iterative process that generates rich revision traces, yet publicly available resources typically expose only final or near-final versions of papers. This limits empirical study of revision behaviour and evaluation of large language models (LLMs) for scientific writing. We introduce EarlySciRev, a dataset of early-stage scientific text revisions automatically extracted from arXiv LaTeX source files. Our key observation is that commented-out text in LaTeX often preserves discarded or alternative formulations written by the authors themselves. By aligning commented segments with nearby final text, we extract paragraph-level candidate revision pairs and apply LLM-based filtering to retain genuine revisions. Starting from 1.28M candidate pairs, our pipeline yields 578k validated revision pairs, grounded in authentic early drafting traces. We additionally provide a human-annotated benchmark for revision detection. EarlySciRev complements existing resources focused on late-stage revisions or synthetic rewrites and supports research on scientific writing dynamics, revision modelling, and LLM-assisted editing.
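The key observation above, that commented-out LaTeX lines often preserve discarded draft text, can be sketched as a simple extraction pass. This is a minimal illustration under stated assumptions: the function name, the minimum-length heuristic for filtering out commented-out macros, and the grouping of consecutive comment lines into paragraphs are all choices made here, not details of the paper's pipeline.

```python
import re


def extract_commented_paragraphs(latex_source: str) -> list[str]:
    """Collect runs of consecutive %-commented lines as candidate draft paragraphs.

    Hypothetical sketch: the paper's actual pipeline additionally aligns
    these segments with nearby final text and filters them with an LLM.
    """
    paragraphs, current = [], []
    for line in latex_source.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("%"):
            # Drop the leading comment marker(s) and one following space.
            current.append(re.sub(r"^%+\s?", "", stripped))
        elif current:
            # A non-comment line ends the current commented-out run.
            paragraphs.append(" ".join(current).strip())
            current = []
    if current:
        paragraphs.append(" ".join(current).strip())
    # Keep segments long enough to be prose, not commented-out commands.
    return [p for p in paragraphs if len(p.split()) >= 5]
```

Such candidates would then feed the alignment and LLM-filtering stages that reduce the 1.28M raw pairs to the 578k validated revisions.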