Balancing Quality and Variation: Spam Filtering Distorts Data Label Distributions

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the “quality–diversity trade-off” in subjective annotation tasks: conventional spam-annotator filtering treats label variation as noise, erroneously removing reliable annotators who hold minority opinions and thereby distorting the true opinion distribution. The authors argue that the spam annotators who can actually be distinguished from real ones tend to exhibit *fixed-response behavior*—repeatedly selecting the same label—rather than guessing randomly, so spam behavior must be redefined accordingly. They systematically evaluate a range of heuristic filtering strategies on data with synthetically injected spam. Results show that label bias stays acceptable only when fewer than 5% of annotators are removed; beyond that, every tested method increases the mean absolute error from the true average label, missing fixed-response spammers while misclassifying reliable dissenting annotators as noise. The core contribution is the insight that in subjective tasks label diversity is meaningful signal—not noise—and the proposal of *response fixity*, rather than inter-annotator agreement, as a principled criterion for spam detection.
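The response-fixity idea can be sketched in a few lines. This is an illustrative reimplementation, not the paper's code: the function names and the 0.9 cutoff are hypothetical choices, and a real deployment would need to calibrate the threshold against item difficulty.

```python
from collections import Counter

def response_fixity(labels):
    """Fraction of an annotator's responses equal to their modal label.

    Values near 1.0 indicate the annotator gives (almost) the same answer
    every time -- the fixed-response behavior the paper associates with
    distinguishable spammers.
    """
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

def flag_fixed_responders(annotations, threshold=0.9):
    """Flag annotators whose responses are nearly constant.

    `annotations` maps annotator id -> list of labels; `threshold`
    is a hypothetical cutoff, not a value reported in the paper.
    """
    return {a for a, labels in annotations.items()
            if len(labels) > 1 and response_fixity(labels) >= threshold}

annotations = {
    "spammer":   [1, 1, 1, 1, 1, 1, 1, 1],  # fixed responses
    "dissenter": [0, 1, 0, 0, 1, 0, 1, 0],  # minority opinion, but varied
    "majority":  [1, 0, 1, 1, 1, 0, 1, 1],
}
print(flag_fixed_responders(annotations))  # → {'spammer'}
```

Note that the dissenting annotator, who would score poorly on agreement-based metrics, is not flagged here: fixity measures constancy, not disagreement with the majority.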

📝 Abstract
For machine learning datasets to accurately represent diverse opinions in a population, they must preserve variation in data labels while filtering out spam or low-quality responses. How can we balance annotator reliability and representation? We empirically evaluate how a range of heuristics for annotator filtering affect the preservation of variation on subjective tasks. We find that these methods, designed for contexts in which variation from a single ground-truth label is considered noise, often remove annotators who disagree instead of spam annotators, introducing suboptimal tradeoffs between accuracy and label diversity. We find that conservative settings for annotator removal (<5%) are best, after which all tested methods increase the mean absolute error from the true average label. We analyze performance on synthetic spam to observe that these methods often assume spam annotators are less random than real spammers tend to be: most spammers are distributionally indistinguishable from real annotators, and the minority that are distinguishable tend to give fixed answers, not random ones. Thus, tasks requiring the preservation of variation reverse the intuition of existing spam filtering methods: spammers tend to be less random than non-spammers, so metrics that assume variation is spam fare worse. These results highlight the need for spam removal methods that account for label diversity.
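The abstract's central failure mode—agreement-based filtering pulling the mean label away from the true average—can be shown with a toy example. The numbers below are made up for illustration, not data from the paper: ten annotators label one item, three of them holding a legitimate minority opinion, and we drop the annotators farthest from the majority label.

```python
import statistics

# Hypothetical labels from 10 annotators on one subjective item
# (e.g., 0 = not offensive, 1 = offensive); 30% hold a minority view.
labels = [1] * 7 + [0] * 3
true_mean = statistics.mean(labels)  # 0.7

def drop_most_disagreeing(labels, frac):
    """Agreement-based filtering: remove the `frac` of annotators
    whose labels lie farthest from the majority label."""
    majority = round(statistics.mean(labels))
    keep = sorted(labels, key=lambda x: abs(x - majority))
    n_keep = len(labels) - int(frac * len(labels))
    return keep[:n_keep]

for frac in (0.0, 0.1, 0.3):
    filtered = drop_most_disagreeing(labels, frac)
    mae = abs(statistics.mean(filtered) - true_mean)
    print(f"removed {frac:.0%}: mean={statistics.mean(filtered):.2f}, MAE={mae:.2f}")
```

Because disagreement is the removal criterion, every annotator dropped is a dissenter, so the filtered mean drifts monotonically toward the majority label and the error from the true average grows with the removal fraction—consistent with the finding that only conservative settings (<5%) avoid this distortion.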
Problem

Research questions and friction points this paper is trying to address.

Balancing annotator reliability against preserving diverse label representation
Measuring how annotator-filtering heuristics affect the preservation of variation on subjective tasks
Preventing spam-removal methods from mistakenly discarding legitimate disagreement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conservative annotator-removal settings (<5%) identified as the best operating point
Response fixity, rather than inter-annotator agreement, as the spam signal
Spam removal framed to preserve label diversity