Modelling and Classifying the Components of a Literature Review

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatically identifying the rhetorical roles of sentences in scientific text (e.g., research gaps, methodological extensions) remains challenging for literature review generation. Method: The authors propose a novel fine-grained annotation schema tailored to review generation and introduce Sci-Sentence, a multidisciplinary sentence-level benchmark that combines expert annotations with semi-synthetic data labelled by LLMs. They evaluate 37 large language models (LLMs) on this benchmark, using both zero-shot learning and supervised fine-tuning. Contribution/Results: Fine-tuned models exceed 96% F1; GPT-4o attains the best performance, while several lightweight open-source models approach state-of-the-art proprietary models when trained on high-quality data. Enriching the training data with LLM-generated semi-synthetic examples significantly boosts the performance of small encoders and several open decoder models. This work establishes a reproducible annotation paradigm and a technical baseline for scientific text understanding and automated review generation.

📝 Abstract
Previous work has demonstrated that AI methods for analysing scientific literature benefit significantly from annotating sentences in papers according to their rhetorical roles, such as research gaps, results, limitations, extensions of existing methodologies, and others. Such representations also have the potential to support the development of a new generation of systems capable of producing high-quality literature reviews. However, achieving this goal requires the definition of a relevant annotation schema and effective strategies for large-scale annotation of the literature. This paper addresses these challenges by 1) introducing a novel annotation schema specifically designed to support literature review generation and 2) conducting a comprehensive evaluation of a wide range of state-of-the-art large language models (LLMs) in classifying rhetorical roles according to this schema. To this end, we also present Sci-Sentence, a novel multidisciplinary benchmark comprising 700 sentences manually annotated by domain experts and 2,240 sentences automatically labelled using LLMs. We evaluate 37 LLMs on this benchmark, spanning diverse model families and sizes, using both zero-shot learning and fine-tuning approaches. The experiments yield several novel insights that advance the state of the art in this challenging domain. First, the current generation of LLMs performs remarkably well on this task when fine-tuned on high-quality data, achieving performance levels above 96% F1. Second, while large proprietary models like GPT-4o achieve the best results, some lightweight open-source alternatives also demonstrate excellent performance. Finally, enriching the training data with semi-synthetic examples generated by LLMs proves beneficial, enabling small encoders to achieve robust results and significantly enhancing the performance of several open decoder models.
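The zero-shot setting described in the abstract can be sketched as a simple prompt-construction step: the model is shown the label set and asked to assign one role per sentence. The label names below are illustrative, drawn from the roles mentioned in the abstract (research gaps, results, limitations, methodological extensions); they are not the paper's exact Sci-Sentence schema.

```python
# Minimal sketch of zero-shot rhetorical-role classification via an LLM prompt.
# ROLES is an illustrative label set, not the paper's full annotation schema.

ROLES = ["research gap", "result", "limitation", "methodological extension", "other"]

def build_zero_shot_prompt(sentence: str, roles=ROLES) -> str:
    """Construct a zero-shot classification prompt for a single sentence."""
    options = "\n".join(f"- {r}" for r in roles)
    return (
        "Classify the rhetorical role of the following sentence from a "
        "scientific paper. Answer with exactly one label.\n\n"
        f"Labels:\n{options}\n\n"
        f"Sentence: {sentence}\n"
        "Label:"
    )

prompt = build_zero_shot_prompt(
    "However, existing approaches do not scale to multidisciplinary corpora."
)
print(prompt)
```

The prompt string would then be sent to whichever LLM is being evaluated; the fine-tuning setting instead trains directly on (sentence, label) pairs from the benchmark.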
Problem

Research questions and friction points this paper is trying to address.

Defining annotation schema for literature review generation
Evaluating LLMs in classifying rhetorical roles
Creating benchmark dataset for multidisciplinary sentences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel annotation schema for literature reviews
Comprehensive evaluation of 37 LLMs
Semi-synthetic data enhances model performance
Francisco Bolaños
Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
Angelo Salatino
Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK
Francesco Osborne
KMi, The Open University
Science of Science · Information Extraction · Knowledge Graphs · Artificial Intelligence · Semantic Web
Enrico Motta
Professor of Knowledge Technologies, KMi, The Open University
Semantic Web · Ontology Engineering · Knowledge Systems · Data Science