Sci2Pol: Evaluating and Fine-tuning LLMs on Scientific-to-Policy Brief Generation

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack dedicated evaluation benchmarks and training resources for generating policy briefs from scientific papers. Method: We introduce Sci2Pol-Bench, the first benchmark for science-to-policy translation, and Sci2Pol-Corpus, a high-quality training dataset. Sci2Pol-Bench organizes 18 tasks along a five-stage taxonomy mirroring the human writing process; for the generation stage, we show that conventional metrics (e.g., BERTScore and ROUGE) fail to capture brief quality, and design an LLM-based evaluation metric aligned with expert judgement. We curate Sci2Pol-Corpus by linking cited scientific papers to policy documents, filtering candidate pairs with an LLM-as-a-judge, and polishing them against expert-written references, then perform supervised fine-tuning on LLaMA and Gemma models. Contribution/Results: Fine-tuned Gemma-27B outperforms the much larger GPT-4o and DeepSeek-V3 on policy brief generation, demonstrating the effectiveness of our dataset and evaluation paradigm in bridging the science-policy gap.

📝 Abstract
We propose Sci2Pol-Bench and Sci2Pol-Corpus, the first benchmark and training dataset for evaluating and fine-tuning large language models (LLMs) on policy brief generation from a scientific paper. We build Sci2Pol-Bench on a five-stage taxonomy to mirror the human writing process: (i) Autocompletion, (ii) Understanding, (iii) Summarization, (iv) Generation, and (v) Verification. It features 18 tasks in multiple-choice and open-ended formats. Specifically, for the Generation stage, we show that BERTScore and ROUGE scores fail to capture the quality of brief writing, and introduce a new LLM-based evaluation metric aligned with expert judgement. Using this benchmark, we evaluate 13 leading open-source and commercial LLMs to uncover key limitations. To improve LLM performance on brief writing, we curate the Sci2Pol-Corpus for fine-tuning. We start by linking each cited scientific paper to its corresponding policy document, drawn from 5.6 million policy records. This produces 140,000 candidate pairs. We then employ an LLM-as-a-judge to filter high-quality examples, followed by in-context polishing using three expert-written samples as references. This process yields a final set of 639 new pairs. Finally, we fine-tune three models on Sci2Pol-Corpus: LLaMA-3.1-8B, Gemma-12B, and Gemma-27B. Fine-tuning leads to consistent performance improvements across Sci2Pol-Bench. Notably, after fine-tuning, Gemma-27B surpasses the much larger GPT-4o and DeepSeek-V3 (671B). These results demonstrate the effectiveness of our corpus in bridging the gap between science and policy.
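The corpus-curation step described above (score candidate paper-brief pairs with an LLM-as-a-judge, keep only the high-quality ones) can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the `judge_pair` heuristic is a stand-in for an actual LLM call with a quality rubric, and the function names and threshold are assumptions.

```python
# Sketch of LLM-as-a-judge filtering over candidate (paper, brief) pairs.
# NOTE: judge_pair is a toy stand-in; the real pipeline would prompt an
# LLM with a rubric and parse its quality score.

def judge_pair(paper_abstract: str, policy_brief: str) -> float:
    """Stand-in judge: returns a quality score in [0, 1].

    Toy proxy: fraction of the brief's vocabulary that also appears
    in the paper abstract. A real judge would be an LLM call.
    """
    paper_words = set(paper_abstract.lower().split())
    brief_words = set(policy_brief.lower().split())
    if not brief_words:
        return 0.0
    return len(paper_words & brief_words) / len(brief_words)

def filter_pairs(candidates, threshold=0.5):
    """Keep candidate (paper, brief) pairs scored at or above threshold."""
    return [(p, b) for p, b in candidates if judge_pair(p, b) >= threshold]

# Illustrative candidates: one plausible pair, one mismatched pair.
pairs = [
    ("rising sea levels threaten coastal cities",
     "coastal cities face rising sea levels"),
    ("protein folding with deep learning",
     "unrelated brief about tax policy"),
]
kept = filter_pairs(pairs)
print(len(kept))  # → 1: the mismatched pair is filtered out
```

In the paper's pipeline this filter reduces 140,000 linked candidates to a small high-quality set, which is then polished in-context before fine-tuning.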
Problem

Research questions and friction points this paper is trying to address.

Creating benchmark for scientific-to-policy brief generation
Developing evaluation metric aligned with expert judgement
Fine-tuning LLMs to bridge science-policy communication gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created benchmark dataset for scientific-to-policy generation tasks
Introduced LLM-based evaluation metric aligned with expert judgement
Fine-tuned models using filtered scientific-policy document pairs
👥 Authors

Weimin Wu
Ph.D. Candidate in Computer Science, Northwestern University
AI for Biology; ML Theory

Alexander C. Furnas
Center for Science of Science and Innovation, Northwestern University, Evanston, IL 60208, USA; Kellogg School of Management, Northwestern University, Evanston, IL 60208, USA

Eddie Yang
Center for Science of Science and Innovation, Northwestern University, Evanston, IL 60208, USA; Kellogg School of Management, Northwestern University, Evanston, IL 60208, USA

Gefei Liu
Department of Computer Science, Brown University, Providence, RI 02912, USA

Akhil Pandey Akella
Center for Science of Science and Innovation, Northwestern University, Evanston, IL 60208, USA; Kellogg School of Management, Northwestern University, Evanston, IL 60208, USA

Xuefeng Song
Department of Computer Science, Northwestern University
AI for Science; Large Language Model; Natural Language Processing

Dashun Wang
Kellogg Chair of Technology, Kellogg School of Management, Northwestern University
Science of Science; Innovation; Computational Social Science; Network Science; Complex Systems

Han Liu
Center for Foundation Models and Generative AI, Northwestern University, Evanston, IL 60208, USA; Department of Statistics and Data Science, Northwestern University, Evanston, IL 60208, USA