🤖 AI Summary
Existing AI text detectors rely on a fixed global threshold, ignoring distributional disparities across subgroups defined by attributes such as text length and writing style. This leads to elevated false positive rates for short texts and for neurotic-style writing, introducing group-level bias. To address this, we propose FairOPT, the first group-adaptive threshold optimization framework for AI text detection. FairOPT partitions inputs into attribute-defined subgroups and learns a decision threshold per subgroup, jointly optimizing macro-F1 score and balanced error rate (BER) across groups, and supports plug-and-play integration with multiple detectors. Experiments across four state-of-the-art detectors and three benchmark datasets show that FairOPT improves overall F1 while reducing inter-subgroup BER disparity by 37%, enhancing both detection robustness and group fairness.
📝 Abstract
The advancement of large language models (LLMs) has made it difficult to differentiate human-written text from AI-generated text. Several AI-text detectors have been developed in response, which typically utilize a fixed global threshold (e.g., θ = 0.5) to classify machine-generated text. However, we find that a single universal threshold can fail to account for subgroup-specific distributional variations. For example, with a fixed threshold, detectors make more false positive errors on shorter human-written texts than on longer ones, and, among long texts, produce more positive classifications for neurotic writing styles than for open ones. These discrepancies can lead to misclassifications that disproportionately affect certain groups. We address this critical limitation by introducing FairOPT, an algorithm for group-specific threshold optimization in AI-generated content classifiers. Our approach partitions data into subgroups based on attributes (e.g., text length and writing style) and learns a decision threshold for each group, enabling a careful balance of performance and fairness metrics within each subgroup. In experiments with four AI text classifiers on three datasets, FairOPT enhances overall F1 score and decreases the balanced error rate (BER) discrepancy across subgroups. Our framework paves the way for more robust and fair classification criteria in AI-generated output detection.
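The core idea described above can be sketched as a per-group threshold search. The snippet below is a minimal, illustrative sketch, not the authors' implementation: it assumes each example comes with a detector score, a binary label (1 = AI-generated), and a subgroup key, and it grid-searches a threshold per subgroup that trades off F1 against balanced error rate via a hypothetical penalty weight `lam`.

```python
from collections import defaultdict

def f1_score(y_true, y_pred):
    # Harmonic mean of precision and recall for the positive (AI) class.
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def balanced_error_rate(y_true, y_pred):
    # BER = average of the false negative rate and false positive rate.
    pos = [p for y, p in zip(y_true, y_pred) if y == 1]
    neg = [p for y, p in zip(y_true, y_pred) if y == 0]
    fnr = sum(1 for p in pos if p == 0) / len(pos) if pos else 0.0
    fpr = sum(1 for p in neg if p == 1) / len(neg) if neg else 0.0
    return 0.5 * (fnr + fpr)

def fit_group_thresholds(scores, labels, groups, lam=1.0):
    """Learn one decision threshold per subgroup by grid search,
    maximizing F1 minus a BER penalty within each group."""
    by_group = defaultdict(list)
    for s, y, g in zip(scores, labels, groups):
        by_group[g].append((s, y))

    grid = [t / 100 for t in range(1, 100)]  # candidate thresholds
    thresholds = {}
    for g, pairs in by_group.items():
        ys = [y for _, y in pairs]
        best_t, best_obj = 0.5, float("-inf")
        for t in grid:
            preds = [1 if s >= t else 0 for s, _ in pairs]
            obj = f1_score(ys, preds) - lam * balanced_error_rate(ys, preds)
            if obj > best_obj:
                best_obj, best_t = obj, t
        thresholds[g] = best_t
    return thresholds
```

In this toy setup, a subgroup whose human-written texts receive higher detector scores (e.g., short texts) naturally learns a higher threshold than other subgroups, which is exactly the mechanism by which per-group thresholds reduce false positives for that subgroup.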