🤖 AI Summary
This study addresses the lack of empirical evidence on the error costs and institutional compatibility of AI systems used in the high-stakes initial screening of research proposals. It introduces an assessment framework that explicitly incorporates error asymmetry, in particular the cost of irreversible false negatives, alongside institutional governance requirements. The authors compare a rule-based TF-IDF approach with large language model (LLM) semantic classification in the context of national-level grant proposal screening. Results show that the TF-IDF method substantially outperforms the LLM, achieving a recall of 78.95% (versus 45.82%) and producing only 68 false negatives (versus 175), thereby sharply reducing the erroneous exclusion of proposals that the committee ultimately selected. The findings underscore that transparency and auditability should take precedence over model complexity, and that error profiles and institutional fit, rather than model sophistication alone, should guide AI-assisted screening in peer review.
📝 Abstract
Research funding agencies are increasingly exploring automated tools to support early-stage proposal screening. Recent advances in large language models (LLMs) have generated optimism regarding their use for text-based evaluation, yet their institutional suitability for high-stakes screening decisions remains underexplored. In particular, there is limited empirical evidence on how automated screening systems perform when evaluated against institutional error costs. This study compares two automated approaches for screening proposals against the priorities of a national funding call: a transparent, rule-based method using term frequency-inverse document frequency (TF-IDF) with domain-specific keyword engineering, and a semantic classification approach based on a large language model. Using selection committee decisions as ground truth for 959 proposals, we evaluate performance with particular attention to error structure. The results show that the TF-IDF-based approach outperforms the LLM-based system across standard metrics, achieving substantially higher recall (78.95% vs. 45.82%) and producing far fewer false negatives (68 vs. 175). The LLM-based system excludes more than half of the proposals ultimately selected by the committee. While false positives can be corrected through subsequent peer review, false negatives represent an irrecoverable exclusion from expert evaluation. By foregrounding error asymmetry and institutional context, this study demonstrates that the suitability of automated screening systems depends not on model sophistication alone, but on how their error profiles, transparency, and auditability align with research evaluation practice. These findings suggest that evaluation design and error tolerance should guide the use of AI-assisted screening tools in research funding more broadly.
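To make the error-asymmetry argument concrete, the minimal sketch below reconciles the reported recall and false-negative figures under the standard definition recall = TP / (TP + FN), with committee selections as ground truth. The true-positive counts used here (255 for TF-IDF, 148 for the LLM) are back-calculated from the abstract's numbers rather than reported in the paper, and they imply roughly 323 committee-selected proposals among the 959 screened.

```python
# Back-of-the-envelope check of the reported screening metrics.
# Assumption: recall = TP / (TP + FN), with committee selections as ground truth.
# The TP counts (255 and 148) are derived from the abstract, not reported directly.

def recall(true_positives: int, false_negatives: int) -> float:
    """Share of committee-selected proposals that the screener retains."""
    return true_positives / (true_positives + false_negatives)

# TF-IDF screener: 68 false negatives; ~255 retained selections (derived).
print(f"TF-IDF recall: {recall(255, 68):.2%}")   # -> 78.95%

# LLM screener: 175 false negatives; ~148 retained selections (derived).
print(f"LLM recall:    {recall(148, 175):.2%}")  # -> 45.82%

# Both rows imply the same ~323 committee-selected proposals, with the LLM
# excluding more than half of them before any expert review takes place.
```

Because a false positive merely forwards a weak proposal to reviewers while a false negative removes a fundable one from consideration entirely, recall and the false-negative count, rather than accuracy or precision alone, are the decisive metrics in this setting.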