🤖 AI Summary
Scientific papers often impede public comprehension and knowledge dissemination due to dense terminology and syntactically complex structures. To address this, we propose a reinforcement learning-based approach for popularizing academic abstracts, featuring a novel dual-granularity accessibility reward mechanism, which operates at both the lexical (term simplification) and syntactic (sentence clarity) levels, and incorporating factual consistency constraints. Compared with supervised fine-tuning and conventional readability-metric-guided methods, our approach improves readability by approximately six U.S. grade levels (e.g., from graduate-level to high-school-level readability), a roughly 90% relative gain over the supervised fine-tuning baseline. Experiments demonstrate that the rewritten abstracts are substantially more readable while preserving factual accuracy and linguistic quality. Our method establishes a scalable, quantitatively evaluable paradigm for democratizing scientific communication.
📝 Abstract
A vast amount of scholarly work is published daily, yet much of it remains inaccessible to the general public due to dense jargon and complex language. To address this challenge in science communication, we introduce a reinforcement learning framework that fine-tunes a language model to rewrite scholarly abstracts into more comprehensible versions. Guided by a carefully balanced combination of word- and sentence-level accessibility rewards, our language model effectively substitutes technical terms with more accessible alternatives, a task that models trained with supervised fine-tuning or guided by conventional readability measures struggle to accomplish. Our best model adjusts the readability level of scholarly abstracts by approximately six U.S. grade levels -- in other words, from a postgraduate to a high school level. This translates to roughly a 90% relative boost over the supervised fine-tuning baseline, all while maintaining factual accuracy and high-quality language. An in-depth analysis of our approach shows that balanced rewards lead to systematic modifications in the base model, likely contributing to smoother optimization and superior performance. We envision this work as a step toward bridging the gap between scholarly research and the general public, particularly younger readers and those without a college degree.
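The balanced word- and sentence-level reward described above can be illustrated with a minimal sketch. This is not the paper's actual reward: the common-word vocabulary, the Flesch-Kincaid-style grade proxy, the syllable heuristic, and the mixing weight `alpha` are all assumptions introduced here for illustration.

```python
import re


def word_level_reward(text, common_words):
    """Lexical proxy: fraction of word tokens found in a common-word list
    (a simple stand-in for measuring how much jargon was removed)."""
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in common_words for t in tokens) / len(tokens)


def count_syllables(word):
    """Crude syllable count: number of vowel runs, at least one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def sentence_level_reward(text, target_grade=9.0):
    """Syntactic proxy: reward is highest when an approximate
    Flesch-Kincaid grade is at or below the target grade level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z]+", text)
    if not sentences or not words:
        return 0.0
    grade = (0.39 * len(words) / len(sentences)
             + 11.8 * sum(count_syllables(w) for w in words) / len(words)
             - 15.59)
    # Linearly penalize each grade level above the target, clipped to [0, 1].
    return max(0.0, 1.0 - max(0.0, grade - target_grade) / 10.0)


def accessibility_reward(text, common_words, alpha=0.5, target_grade=9.0):
    """Balanced dual-granularity reward: alpha trades off the lexical
    term against the syntactic term (both hypothetical choices here)."""
    return (alpha * word_level_reward(text, common_words)
            + (1 - alpha) * sentence_level_reward(text, target_grade))
```

In an RL fine-tuning loop, a scalar like this (combined with a factual-consistency check, not shown) would score each rewritten abstract; the balance between the two terms is what the abstract credits with smoother optimization.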