🤖 AI Summary
Crowdsourced annotations frequently exhibit annotator disagreement, conventionally treated as noise to be discarded. However, systematic disagreement—e.g., arising from demographic differences—is often a meaningful signal rather than mere noise.
Method: We propose the first Bayesian annotation model that explicitly incorporates annotator demographic attributes, jointly modeling annotator competence, group-level preferences, and random error. This enables interpretable separation of systematic disagreement from stochastic noise. The approach is validated on synthetic data and paired with subgroup disparity analysis to support fine-grained annotation purification.
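To make the modeling assumptions concrete, here is a minimal generative sketch of the kind of process described above. This is an illustration under assumed simplifications, not the paper's actual model (NUTMEG): each demographic group has its own "true" label per item, and each annotator either reports their group's label (with probability equal to their competence) or answers at random.

```python
import random

def simulate_annotations(n_items, groups, p_disagree, competence,
                         annotators_per_group, seed=0):
    """Toy generative process (illustrative, not the paper's model).

    - Group 0 holds a base label for each item.
    - Every other group systematically flips it with prob. p_disagree
      (systematic, demographic disagreement).
    - Each annotator reports their group's label with prob. `competence`,
      otherwise a uniformly random label (stochastic noise / spam).
    Returns a list of (item_id, group, annotation) triples.
    """
    rng = random.Random(seed)
    labels = [0, 1]
    data = []
    for item in range(n_items):
        base = rng.choice(labels)
        group_truth = {groups[0]: base}
        for g in groups[1:]:
            group_truth[g] = (1 - base) if rng.random() < p_disagree else base
        for g in groups:
            for _ in range(annotators_per_group):
                if rng.random() < competence:
                    ann = group_truth[g]       # group-level signal
                else:
                    ann = rng.choice(labels)   # random error
                data.append((item, g, ann))
    return data
```

A model that knows each annotator's group can, in principle, invert this process to recover per-group labels and per-annotator competence; aggregation that ignores groups cannot distinguish the flip from noise.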
Contribution/Results: Experiments on both synthetic and real-world datasets demonstrate that our method significantly outperforms conventional aggregation strategies (e.g., majority voting). Downstream NLP models trained on purified labels achieve substantial performance gains, empirically validating that distinguishing disagreement types is critical for improving training data quality.
📝 Abstract
NLP models often rely on human-labeled data for training and evaluation. Many approaches crowdsource this data from a large number of annotators with varying skills, backgrounds, and motivations, resulting in conflicting annotations. These conflicts have traditionally been resolved by aggregation methods that assume disagreements are errors. Recent work has argued that for many tasks annotators may have genuine disagreements and that variation should be treated as signal rather than noise. However, few models separate signal and noise in annotator disagreement. In this work, we introduce NUTMEG, a new Bayesian model that incorporates information about annotator backgrounds to remove noisy annotations from human-labeled training data while preserving systematic disagreements. Using synthetic data, we show that NUTMEG is more effective at recovering ground truth from annotations with systematic disagreement than traditional aggregation methods. We provide further analysis characterizing how differences in subpopulation sizes, rates of disagreement, and rates of spam affect the performance of our model. Finally, we demonstrate that downstream models trained on NUTMEG-aggregated data significantly outperform models trained on data from traditional aggregation methods. Our results highlight the importance of accounting for both annotator competence and systematic disagreements when training on human-labeled data.
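The abstract's core critique of traditional aggregation can be seen in a tiny worked example. The sketch below (hypothetical data, not from the paper) shows how pooled majority voting erases a minority subpopulation's systematic label, while grouping annotators by background preserves it:

```python
from collections import Counter

def majority(votes):
    """Plurality vote over labels; ties broken by the smallest label."""
    counts = Counter(votes)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)

# Hypothetical item: 7 majority-group annotators genuinely see label 0,
# 3 minority-group annotators genuinely see label 1 (systematic disagreement).
annotations = [("maj", 0)] * 7 + [("min", 1)] * 3

# Pooled majority voting discards the minority view entirely.
pooled = majority([a for _, a in annotations])  # -> 0

# Group-aware aggregation keeps one label per subpopulation.
per_group = {g: majority([a for gg, a in annotations if gg == g])
             for g in ("maj", "min")}           # -> {"maj": 0, "min": 1}
```

Majority voting cannot tell this systematic split from random noise; a model that conditions on annotator background can, which is the gap NUTMEG targets.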