🤖 AI Summary
Foundation models often generate hallucinated text, and existing uncertainty quantification (UQ) methods lack statistical guarantees on false discovery rate (FDR) control in selective prediction. To address this, we propose COIN—a novel framework that introduces *provably valid* FDR control to generative question answering for the first time. COIN integrates split conformal prediction (SCP) with high-probability error upper bounds to construct a rigorous uncertainty filtering mechanism, ensuring strict adherence to user-specified FDR constraints. It further supports diverse UQ strategies and bound constructions via empirical error estimation and Clopper–Pearson confidence intervals. Experiments across multimodal and general text generation tasks demonstrate that COIN significantly improves effective sample retention and prediction efficiency, while exhibiting strong robustness, high statistical power, and reliable risk control—even under limited calibration data.
📝 Abstract
Uncertainty quantification (UQ) for foundation models is essential to identify and mitigate potential hallucinations in automatically generated text. However, heuristic UQ approaches lack formal guarantees for key metrics such as the false discovery rate (FDR) in selective prediction. Previous work adopts the split conformal prediction (SCP) framework to ensure desired coverage of admissible answers by constructing prediction sets, but these sets often contain incorrect candidates, limiting their practical utility. To address this, we propose COIN, an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question under user-specified FDR constraints. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods such as Clopper–Pearson to establish a high-probability upper bound on the true error rate (i.e., FDR). This enables the selection of the largest uncertainty threshold that ensures FDR control on test data while significantly increasing sample retention. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data across both general and multimodal text generation tasks. Furthermore, we show that employing alternative upper bound constructions and UQ strategies can further boost COIN's power, underscoring its extensibility and adaptability to diverse application scenarios.
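The calibration procedure described above can be sketched in a few lines: compute the number of errors among calibration answers kept at each candidate threshold, form a one-sided Clopper–Pearson upper bound on the true error rate, and take the largest threshold whose bound stays below the target FDR level. This is an illustrative sketch only, not the paper's implementation; the function names, the threshold grid (the observed uncertainty values), and the toy calibration data are assumptions, while the Clopper–Pearson bound itself is the standard Beta-quantile construction.

```python
import numpy as np
from scipy.stats import beta


def cp_upper_bound(k: int, n: int, delta: float) -> float:
    """One-sided Clopper-Pearson upper confidence bound (level 1 - delta)
    on a binomial proportion, given k errors among n kept answers."""
    if k >= n:
        return 1.0
    return float(beta.ppf(1.0 - delta, k + 1, n - k))


def calibrate_threshold(uncertainty, correct, alpha, delta):
    """Return the largest uncertainty threshold tau such that the
    Clopper-Pearson upper bound on the error rate of the answers kept
    at tau stays below alpha; None if no threshold qualifies.
    (Hypothetical helper for illustration, not the paper's code.)"""
    uncertainty = np.asarray(uncertainty, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    # Scan candidate thresholds from most to least permissive.
    for tau in np.sort(np.unique(uncertainty))[::-1]:
        keep = uncertainty <= tau           # answers that would be emitted
        n = int(keep.sum())
        k = int((~correct[keep]).sum())     # wrong answers among them
        if n > 0 and cp_upper_bound(k, n, delta) <= alpha:
            return float(tau)
    return None


# Entirely synthetic calibration set: low-uncertainty answers happen to
# be admissible, high-uncertainty ones do not.
unc = np.linspace(0.05, 1.0, 20)
ok = np.array([True] * 15 + [False] * 5)
tau = calibrate_threshold(unc, ok, alpha=0.3, delta=0.1)
```

At test time, only answers whose uncertainty falls below the calibrated `tau` would be returned; scanning thresholds from largest to smallest makes the selection maximize sample retention subject to the bound, matching the "largest uncertainty threshold" criterion in the abstract.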