🤖 AI Summary
This study addresses the emerging threat posed by AI-generated fake scientific tables to academic integrity, a challenge inadequately tackled by existing detection methods. The work presents the first systematic investigation of this issue, introducing FabTab, the first benchmark dataset comprising both AI-generated and human-authored scientific tables. The authors propose a novel detection approach centered on the perplexity discrepancy between a table's structural skeleton and its numerical content. By integrating structural parsing, language model perplexity, and multi-perspective feature engineering within a Random Forest classifier, the method achieves strong performance, yielding AUROC scores of 0.987 and 0.883 on in-domain and out-of-domain evaluations, respectively, substantially outperforming current state-of-the-art techniques.
📝 Abstract
AI-generated fabricated scientific manuscripts raise growing concerns about large-scale breaches of academic integrity. In this work, we present the first systematic study of detecting AI-generated fabricated scientific tables in empirical NLP papers, since the information in tables serves as critical evidence for claims. We construct FabTab, the first benchmark dataset of fabricated manuscripts with tables, comprising 1,173 AI-generated papers and 1,215 human-authored ones in empirical NLP. Through a comprehensive analysis, we identify systematic differences between fabricated and real tables and operationalize them into a set of discriminative features within the TAB-AUDIT framework. The key feature, within-table mismatch, captures the perplexity gap between a table's skeleton and its numerical content. Experimental results show that a Random Forest classifier built on these features significantly outperforms prior state-of-the-art methods, achieving 0.987 AUROC in-domain and 0.883 AUROC out-of-domain. Our findings highlight experimental tables as a critical forensic signal for detecting AI-generated scientific fraud and provide a new benchmark for future research.
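To make the within-table mismatch idea concrete, the sketch below separates a markdown table into its structural skeleton (with numbers masked) and its numeric content, then scores each part with a perplexity function and takes the gap. This is only an illustration under stated assumptions: the paper's TAB-AUDIT feature uses a language model's perplexity, whereas here a toy character-level unigram model stands in for the LM so the snippet stays self-contained; all function names (`split_table`, `char_perplexity`, `within_table_mismatch`) are hypothetical, not from the paper.

```python
import math
import re
from collections import Counter

def split_table(table_md: str):
    """Split a markdown table into its structural skeleton and its
    numeric content, masking every number in the skeleton with <NUM>."""
    num_pat = re.compile(r"-?\d+(?:\.\d+)?")
    numbers = num_pat.findall(table_md)
    skeleton = num_pat.sub("<NUM>", table_md)
    return skeleton, numbers

def char_perplexity(text: str, reference: str) -> float:
    """Toy stand-in for LM perplexity: a character-unigram model with
    add-one smoothing, fit on `reference` and scored on `text`.
    (The paper uses a real language model instead.)"""
    counts = Counter(reference)
    vocab = set(reference) | set(text)
    total = sum(counts.values()) + len(vocab)
    log_prob = sum(math.log((counts[c] + 1) / total) for c in text)
    return math.exp(-log_prob / max(len(text), 1))

def within_table_mismatch(table_md: str, reference: str) -> float:
    """Perplexity gap between a table's numeric content and its
    structural skeleton -- the key discriminative feature."""
    skeleton, numbers = split_table(table_md)
    ppl_skeleton = char_perplexity(skeleton, reference)
    ppl_numbers = char_perplexity(" ".join(numbers), reference)
    return ppl_numbers - ppl_skeleton

# Hypothetical example tables, not drawn from FabTab.
table = "| Model | AUROC |\n|---|---|\n| A | 0.91 |\n| B | 0.87 |"
reference = "| Model | Accuracy |\n|---|---|\n| C | 0.90 |"
print(within_table_mismatch(table, reference))
```

In the full method, this gap would be one of several multi-perspective features fed to a Random Forest classifier (e.g. scikit-learn's `RandomForestClassifier`); the feature-extraction step is the part sketched here.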