Revisiting Active Learning under (Human) Label Variation

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In supervised learning, human label variation (HLV), i.e., reasonable inter-annotator disagreement arising from subjectivity, is often mischaracterized as label noise; conventional active learning (AL) assumes a single ground truth and thus overlooks HLV's informative value. Method: The authors propose an HLV-aware AL framework that decomposes observed label variation into signal (HLV) and noise (annotation errors), reconsiders the entire AL pipeline (instance selection, annotator choice, and label representation) to explicitly model HLV, and discusses integrating large language models (LLMs) as annotators. Contribution: This conceptual work lays a theoretical foundation for HLV-aware active learning, challenges the single-ground-truth assumption underlying classical AL, and advances a learning paradigm better aligned with the inherent complexity of real-world annotation processes.

📝 Abstract
Access to high-quality labeled data remains a limiting factor in applied supervised learning. While label variation (LV), i.e., differing labels for the same instance, is common, especially in natural language processing, annotation frameworks often still rest on the assumption of a single ground truth. This overlooks human label variation (HLV), the occurrence of plausible differences in annotations, as an informative signal. Similarly, active learning (AL), a popular approach to optimizing the use of limited annotation budgets in training ML models, often relies on at least one of several simplifying assumptions, which rarely hold in practice when acknowledging HLV. In this paper, we examine foundational assumptions about truth and label nature, highlighting the need to decompose observed LV into signal (e.g., HLV) and noise (e.g., annotation error). We survey how the AL and (H)LV communities have addressed -- or neglected -- these distinctions and propose a conceptual framework for incorporating HLV throughout the AL loop, including instance selection, annotator choice, and label representation. We further discuss the integration of large language models (LLMs) as annotators. Our work aims to lay a conceptual foundation for HLV-aware active learning, better reflecting the complexities of real-world annotation.
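The abstract's idea of HLV-aware label representation can be illustrated with a minimal sketch: instead of collapsing annotations into a single majority label, an instance is represented as a soft label distribution, whose entropy can serve as one (of several possible) disagreement signals for instance selection. The function names and the toy annotation sets below are illustrative assumptions, not the paper's method; in particular, disentangling whether high entropy reflects HLV (signal) or annotation error (noise) requires annotator modeling beyond this sketch.

```python
import math
from collections import Counter


def soft_label(annotations):
    """Represent an instance's labels as a distribution over classes
    rather than a single majority vote (a simple HLV-aware representation)."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}


def label_entropy(dist):
    """Entropy (bits) of a soft label. High entropy flags disagreement;
    whether it is plausible HLV or annotation error is not decided here."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


# Hypothetical annotation sets for two instances (illustrative only)
unanimous = ["pos", "pos", "pos", "pos"]
contested = ["pos", "neg", "pos", "neg"]

print(label_entropy(soft_label(unanimous)))  # 0.0 (no disagreement)
print(label_entropy(soft_label(contested)))  # 1.0 (maximal binary disagreement)
```

An HLV-aware AL loop could, for example, prioritize high-entropy instances for re-annotation, or route them to annotators (human or LLM) modeled as reliable for that instance type.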
Problem

Research questions and friction points this paper is trying to address.

Addressing human label variation in active learning
Decomposing label variation into signal and noise
Integrating large language models as annotators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates human label variation in active learning
Decomposes label variation into signal and noise
Integrates large language models as annotators
Cornelia Gruber
LMU Munich, Department of Statistics, Germany; Munich Center for Machine Learning (MCML), Germany
Helen Alber
LMU Munich, Department of Statistics, Germany; Munich Center for Machine Learning (MCML), Germany
Bernd Bischl
Chair of Statistical Learning and Data Science, LMU Munich
Machine Learning, Statistics, Data Science, Statistical Learning, Scientific Software
Göran Kauermann
LMU Munich, Department of Statistics, Germany; Munich Center for Machine Learning (MCML), Germany
Barbara Plank
Professor, LMU Munich, Visiting Prof ITU Copenhagen
Natural Language Processing, Computational Linguistics, Machine Learning, Transfer Learning
Matthias Aßenmacher
Ludwig-Maximilians-Universität München
Natural Language Processing, Statistics, Machine Learning