🤖 AI Summary
Existing image classification benchmarks (e.g., CIFAR, ImageNet) suffer from both label noise and *missing labels* that arise when multiple classes co-occur in a single image, an issue that has been critically overlooked and leads to biased model evaluation. This work introduces REVEAL, a systematic test-set refinement framework that, for the first time, integrates missing-label detection as a core dataset quality assessment task. Methodologically, REVEAL aggregates ensemble predictions from multiple vision-language models (LLaVA, BLIP, Janus, Qwen) via confidence-weighted averaging and consensus-based filtering to generate interpretable soft labels, which are then refined through human-in-the-loop annotation using Cleanlab, Docta, and MTurk. Evaluated across six benchmarks, REVEAL significantly improves label quality: its soft labels achieve strong agreement with human judgments (Cohen's κ > 0.92), enabling fairer and more reliable model evaluation.
📝 Abstract
Image classification benchmark datasets such as CIFAR, MNIST, and ImageNet serve as critical tools for model evaluation. However, despite prior cleaning efforts, these datasets still suffer from pervasive noisy labels and often contain missing labels caused by the co-occurrence of multiple classes within a single image. This results in misleading model comparisons and unfair evaluations. Existing label cleaning methods focus primarily on noisy labels, while the issue of missing labels remains largely overlooked. Motivated by these challenges, we present a comprehensive framework named REVEAL, which integrates state-of-the-art pre-trained vision-language models (e.g., LLaVA, BLIP, Janus, Qwen) with advanced machine/human label curation methods (e.g., Docta, Cleanlab, MTurk) to systematically address both noisy-label and missing-label detection in widely used image classification test sets. REVEAL detects potential noisy labels and omissions, aggregates predictions from various methods, and refines label accuracy through confidence-informed predictions and consensus-based filtering. Additionally, we provide a thorough analysis of state-of-the-art vision-language models and pre-trained image classifiers, highlighting their strengths and limitations in the context of dataset renovation and distilling the findings into 10 observations. Our method effectively reveals missing labels in public datasets and produces soft-labeled results with associated likelihoods. Through human verification, REVEAL significantly improves the quality of 6 benchmark test sets, aligning closely with human judgments and enabling more accurate and meaningful comparisons in image classification.
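To make the aggregation step concrete, here is a minimal sketch of how confidence-weighted averaging with consensus-based filtering could turn per-model predictions into soft labels. The function name, the threshold value, and the example predictions are illustrative assumptions, not the paper's actual implementation or numbers.

```python
# Hypothetical sketch of confidence-weighted aggregation with
# consensus-based filtering, as described in the abstract.
# All names, thresholds, and example values are assumptions.
from collections import defaultdict

def aggregate_soft_labels(predictions, consensus_threshold=0.5):
    """Combine per-model {label: confidence} dicts into soft labels.

    Each label's confidences are averaged over all models; labels whose
    average falls below the consensus threshold are filtered out, and
    the survivors are renormalized into a soft-label distribution.
    """
    totals = defaultdict(float)
    for model_pred in predictions:
        for label, conf in model_pred.items():
            totals[label] += conf
    n_models = len(predictions)
    avg = {lab: s / n_models for lab, s in totals.items()}
    # Consensus filtering: keep labels a weighted majority agrees on.
    kept = {lab: c for lab, c in avg.items() if c >= consensus_threshold}
    z = sum(kept.values()) or 1.0
    return {lab: c / z for lab, c in kept.items()}

# Example: three models; "cat" and "dog" co-occur in the image,
# so both survive, while the low-consensus "fox" is filtered out.
preds = [
    {"cat": 0.90, "dog": 0.70},
    {"cat": 0.80, "dog": 0.60, "fox": 0.20},
    {"cat": 0.95, "dog": 0.50},
]
soft = aggregate_soft_labels(preds)
```

In this sketch, a label missing from the original annotation (here "dog") can still receive substantial soft-label mass when several models agree on it, which is how missing-label detection follows naturally from the same aggregation.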