Instance-wise Supervision-level Optimization in Active Learning

📅 2025-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses label-efficient active learning by jointly optimizing *which samples to annotate* and *at what supervision granularity* (e.g., full vs. weak supervision) under a fixed annotation budget. The authors propose the first sample-level dynamic supervision-level selection framework, featuring a unified optimization criterion that jointly maximizes the value-to-cost ratio (VCR) and diversity to simultaneously determine sample selection and supervision-intensity allocation. The method integrates active learning, weak supervision, and combinatorial optimization to enable adaptive scheduling of multi-level supervision strategies. Extensive experiments on multi-class classification tasks demonstrate significant improvements over conventional active learning and state-of-the-art hybrid supervision baselines: it reduces average annotation cost by 23.6% at equivalent accuracy, or boosts classification accuracy by 1.8–3.4 percentage points under the same budget. The implementation is publicly available.

📝 Abstract
Active learning (AL) is a label-efficient machine learning paradigm that focuses on selectively annotating high-value instances to maximize learning efficiency. Its effectiveness can be further enhanced by incorporating weak supervision, which uses rough yet cost-effective annotations instead of exact (i.e., full) but expensive annotations. We introduce a novel AL framework, Instance-wise Supervision-Level Optimization (ISO), which not only selects the instances to annotate but also determines their optimal annotation level within a fixed annotation budget. Its optimization criterion leverages the value-to-cost ratio (VCR) of each instance while ensuring diversity among the selected instances. In classification experiments, ISO consistently outperforms traditional AL methods and surpasses a state-of-the-art AL approach that combines full and weak supervision, achieving higher accuracy at a lower overall cost. The code is available at https://github.com/matsuo-shinnosuke/ISOAL.
Problem

Research questions and friction points this paper is trying to address.

How to jointly decide which instances to annotate and at what annotation level in active learning.
How to exploit cheap but rough weak supervision within a fixed annotation budget.
How to achieve higher accuracy at lower annotation cost than traditional AL methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes annotation level per instance
Uses value-to-cost ratio for selection
Ensures diversity in selected instances
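The three ideas above can be combined in a simple greedy loop. The sketch below is an illustration of the general selection principle the paper describes, not the authors' actual implementation: each unlabeled instance has an estimated value and a cost at each supervision level (e.g., weak vs. full), and we repeatedly pick the (instance, level) pair with the best diversity-penalized value-to-cost ratio until the budget is exhausted. All names (`iso_select`, `div_weight`) and the specific diversity penalty are assumptions for illustration.

```python
import numpy as np

def iso_select(values, costs, features, budget, div_weight=0.5):
    """Greedy sketch of VCR-based selection with a diversity penalty.

    values[i, l]   : estimated value of annotating instance i at level l
                     (e.g., l=0 weak, l=1 full supervision)
    costs[i, l]    : annotation cost of instance i at level l
    features[i]    : feature vector of instance i (used for diversity)
    budget         : total annotation budget
    Returns a list of (instance, level) pairs.
    """
    n, n_levels = values.shape
    selected = []
    chosen_feats = []          # features of already-selected instances
    remaining = budget
    available = set(range(n))
    while available:
        best, best_score = None, -np.inf
        for i in available:
            for l in range(n_levels):
                if costs[i, l] > remaining:
                    continue           # cannot afford this level
                vcr = values[i, l] / costs[i, l]
                # Diversity: penalize similarity to instances already chosen.
                if chosen_feats:
                    sim = max(float(features[i] @ f) for f in chosen_feats)
                else:
                    sim = 0.0
                score = vcr - div_weight * sim
                if score > best_score:
                    best, best_score = (i, l), score
        if best is None:
            break                      # nothing affordable remains
        i, l = best
        selected.append((i, l))
        remaining -= costs[i, l]
        chosen_feats.append(features[i])
        available.remove(i)            # each instance annotated at most once
    return selected
```

A usage example: with three instances whose weak labels cost 1 and full labels cost 3, and a budget of 4, the loop annotates the instances with the highest value-to-cost ratio at the affordable level, never exceeding the budget.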