🤖 AI Summary
In supervised learning, human label variation (HLV), i.e., plausible inter-annotator disagreement arising from subjectivity, is often mischaracterized as label noise; conventional active learning (AL) assumes a single ground truth and thus overlooks HLV's informative value. Method: The paper examines foundational assumptions about truth and the nature of labels, arguing that observed label variation should be decomposed into signal (HLV) and noise (annotation error). It surveys how the AL and (H)LV communities have addressed, or neglected, this distinction, and proposes a conceptual framework for incorporating HLV throughout the AL loop, covering instance selection, annotator choice, and label representation. It further discusses integrating large language models (LLMs) as annotators. Contribution: This work lays a conceptual foundation for HLV-aware active learning, challenging the single-ground-truth assumption underlying classical AL and advancing a learning paradigm better aligned with the inherent complexity of real-world annotation processes.
📝 Abstract
Access to high-quality labeled data remains a limiting factor in applied supervised learning. While label variation (LV), i.e., differing labels for the same instance, is common, especially in natural language processing, annotation frameworks often still rest on the assumption of a single ground truth. This overlooks human label variation (HLV), the occurrence of plausible differences in annotations, as an informative signal. Similarly, active learning (AL), a popular approach to optimizing the use of limited annotation budgets in training ML models, often relies on at least one of several simplifying assumptions, which rarely hold in practice when acknowledging HLV. In this paper, we examine foundational assumptions about truth and label nature, highlighting the need to decompose observed LV into signal (e.g., HLV) and noise (e.g., annotation error). We survey how the AL and (H)LV communities have addressed -- or neglected -- these distinctions and propose a conceptual framework for incorporating HLV throughout the AL loop, including instance selection, annotator choice, and label representation. We further discuss the integration of large language models (LLMs) as annotators. Our work aims to lay a conceptual foundation for HLV-aware active learning, better reflecting the complexities of real-world annotation.
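To make two of the abstract's components concrete, here is a minimal, hypothetical sketch (not the paper's actual method) of how an HLV-aware AL step might represent repeated annotations as soft labels instead of collapsing them to a single ground truth, and select the next instance by uncertainty. All function names (`soft_label`, `select_instance`) are illustrative assumptions:

```python
# Hypothetical sketch of two HLV-aware active-learning ingredients:
# soft label representation and uncertainty-based instance selection.
# Names and design are illustrative, not taken from the paper.
from collections import Counter
import math

def soft_label(annotations):
    """Represent repeated annotations as a distribution over labels,
    preserving human label variation instead of majority-voting it away."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}

def entropy(dist):
    """Shannon entropy (in nats) of a label distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def select_instance(model_probs):
    """Pick the unlabeled instance whose predicted label distribution
    has the highest entropy, i.e., where the model is most uncertain."""
    return max(model_probs, key=lambda i: entropy(model_probs[i]))

# Usage: three annotators plausibly disagree on one instance...
print(soft_label(["pos", "pos", "neg"]))   # kept as a distribution
# ...and the most uncertain instance is queried next.
probs = {"x1": {"pos": 0.9, "neg": 0.1},
         "x2": {"pos": 0.5, "neg": 0.5}}
print(select_instance(probs))
```

In a single-ground-truth pipeline, the `["pos", "pos", "neg"]` instance would be collapsed to `"pos"` and the disagreement discarded; keeping the soft label is the simplest way the signal/noise distinction discussed above can enter training.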