🤖 AI Summary
This work addresses the challenge of conflating established scientific statements with proposed research intent in automated scientific claim identification. We introduce NSF-SciFy, the first large-scale, temporally diverse (1970s–2020s) dataset of over 400K NSF award abstracts spanning all STEM disciplines, annotated to make this distinction explicit. We formally define the task of disentangling *claims* (assertions grounded in existing knowledge) from *investigation proposals* (aspirational research intentions). Methodologically, we combine zero-shot prompting of frontier LLMs for joint extraction, fine-tuning of base models, and novel LLM-based and BERTScore-based automatic evaluation. On the materials science subset (NSF-SciFy-MatSci), fine-tuned models improve claim and proposal extraction by roughly 100% and 90% relative F1, respectively; for technical-to-lay summary generation, models reach BERTScore F1 of 0.85+. This work establishes foundational resources and a principled methodology for scientific claim verification and meta-scientific analysis.
📝 Abstract
We present NSF-SciFy, a large-scale dataset for scientific claim extraction derived from the National Science Foundation (NSF) awards database, comprising over 400K grant abstracts spanning five decades. While previous datasets have relied on published literature, we leverage grant abstracts, which offer a unique advantage: they capture claims at an earlier stage in the research lifecycle, before publication. We also introduce a new task: distinguishing existing scientific claims from aspirational research intentions in proposals. Using zero-shot prompting with frontier large language models, we jointly extract 114K scientific claims and 145K investigation proposals from 16K grant abstracts in the materials science domain to create a focused subset, NSF-SciFy-MatSci. We use this subset to evaluate three key tasks: (1) technical-to-non-technical abstract generation, where models achieve high BERTScore (0.85+ F1); (2) scientific claim extraction, where fine-tuned models outperform base models by 100% relative improvement; and (3) investigation proposal extraction, showing 90%+ improvement with fine-tuning. We also introduce novel LLM-based evaluation metrics for robust assessment of claim/proposal extraction quality. As the largest scientific claim dataset to date -- with an estimated 2.8 million claims across all NSF-funded STEM disciplines -- NSF-SciFy enables new opportunities for claim verification and meta-scientific research. We publicly release all datasets, trained models, and evaluation code to facilitate further research.