Recall, Risk, and Governance in Automated Proposal Screening for Research Funding: Evidence from a National Funding Programme

📅 2026-02-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of empirical evaluation regarding the error costs and institutional compatibility of AI systems in the initial screening of high-risk research proposals. It introduces a novel assessment framework that explicitly incorporates error asymmetry—particularly the critical impact of irreversible false negatives—and institutional governance requirements. The authors compare a rule-based TF-IDF approach with large language model (LLM) semantic classification in the context of national-level grant proposal screening. Results demonstrate that the TF-IDF method substantially outperforms the LLM, achieving a recall of 78.95% (versus 45.82%) and generating only 68 false negatives (versus 175), thereby significantly reducing the erroneous rejection of high-quality proposals. The findings underscore that transparency and auditability should take precedence over model complexity, offering a new paradigm for institutionally aligned AI-assisted peer review.
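The rule-based approach summarized above can be sketched roughly as follows. This is a minimal illustration only: the keyword set, tokenization, and any screening threshold are placeholder assumptions, since the paper's actual domain-specific keyword engineering is not reproduced on this page.

```python
import math
from collections import Counter

# Illustrative domain keywords -- placeholders, not the paper's
# actual engineered keyword list.
KEYWORDS = {"quantum", "sensor", "fabrication"}

def tfidf_keyword_scores(docs):
    """Score each document by the summed TF-IDF weight of domain keywords."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency of each term
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = sum(
            (tf[w] / len(toks)) * math.log(n / df[w])  # tf * idf
            for w in KEYWORDS
            if w in tf
        )
        scores.append(score)
    return scores

proposals = [
    "quantum sensor fabrication for low-noise measurement",
    "a cultural history of cartography",
    "scalable quantum error correction",
]
scores = tfidf_keyword_scores(proposals)
# Proposals scoring above a calibrated threshold would be forwarded
# to peer review; the rest would be screened out.
```

Because every step (keyword list, term weights, threshold) is inspectable, a decision to screen a proposal out can be traced to specific terms, which is the transparency and auditability property the summary emphasizes.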

📝 Abstract
Research funding agencies are increasingly exploring automated tools to support early-stage proposal screening. Recent advances in large language models (LLMs) have generated optimism regarding their use for text-based evaluation, yet their institutional suitability for high-stakes screening decisions remains underexplored. In particular, there is limited empirical evidence on how automated screening systems perform when evaluated against institutional error costs. This study compares two automated approaches for proposal screening against the priorities of a national funding call: a transparent, rule-based method using term frequency-inverse document frequency (TF-IDF) with domain-specific keyword engineering, and a semantic classification approach based on a large language model. Using selection committee decisions as ground truth for 959 proposals, we evaluate performance with particular attention to error structure. The results show that the TF-IDF-based approach outperforms the LLM-based system across standard metrics, achieving substantially higher recall (78.95% vs 45.82%) and producing far fewer false negatives (68 vs 175). The LLM-based system excludes more than half of the proposals ultimately selected by the committee. While false positives can be corrected through subsequent peer review, false negatives represent an irrecoverable exclusion from expert evaluation. By foregrounding error asymmetry and institutional context, this study demonstrates that the suitability of automated screening systems depends not on model sophistication alone, but on how their error profiles, transparency, and auditability align with research evaluation practice. These findings suggest that evaluation design and error tolerance should guide the use of AI-assisted screening tools in research funding more broadly.
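The headline numbers in the abstract can be checked with the standard recall formula, recall = TP / (TP + FN). The false-negative counts (68 and 175) are reported directly; the true-positive counts below are back-derived from the reported recall values (78.95% and 45.82%), which jointly imply roughly 323 committee-selected proposals. That implied total is an inference from the published figures, not a number stated on this page.

```python
def recall(tp: int, fn: int) -> float:
    """Share of committee-selected proposals the screen retains."""
    return tp / (tp + fn)

# Reported: 68 false negatives (TF-IDF) vs 175 (LLM).
# Inferred (assumption): ~323 committee-selected proposals,
# back-derived from the reported recall values.
selected = 323
tfidf_tp, tfidf_fn = selected - 68, 68
llm_tp, llm_fn = selected - 175, 175

print(f"TF-IDF recall: {recall(tfidf_tp, tfidf_fn):.2%}")  # ~78.95%
print(f"LLM recall:    {recall(llm_tp, llm_fn):.2%}")      # ~45.82%
```

The arithmetic makes the error asymmetry concrete: each false negative is a selected-quality proposal that never reaches peer review, so the 107-proposal gap between the two systems cannot be recovered downstream.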
Problem

Research questions and friction points this paper is trying to address.

automated proposal screening
error asymmetry
research funding
false negatives
institutional suitability
Innovation

Methods, ideas, or system contributions that make the work stand out.

automated proposal screening
error asymmetry
TF-IDF
large language models
research funding governance
Chandan G. Nagarajappa
DST-Centre for Policy Research, Indian Institute of Science, Bengaluru, India - 560012
Moumita Koley
DST-Centre for Policy Research, Indian Institute of Science, Bengaluru, India - 560012; Research on Research Institute (RoRI), UK
Avinash Kumar
Research Assistant, Soongsil University, Seoul, South Korea
Machine Learning; Deep Learning; Computer Vision; GANs
Rabindra Panigrahy
Department of Science & Technology, Ministry of Science & Technology, India
Pramod Kumar Arya
Department of Science & Technology, Ministry of Science & Technology, India