Decomposition Sampling for Efficient Region Annotations in Active Learning

📅 2025-12-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high computational cost, spurious region selection, and overreliance on uncertainty estimation in region-level active learning for dense prediction tasks, this paper proposes DECOMP (decomposition sampling). DECOMP first disentangles input images into class-specific components using pseudo-labels, then performs confidence-weighted sampling of representative regions within each class, jointly leveraging inter-class decomposition and intra-class reliability guidance. By reducing dependence on uncertainty estimation, DECOMP improves coverage of minority classes and enhances annotation efficiency. Extensive experiments on ROI classification and 2D/3D medical image segmentation demonstrate that DECOMP consistently outperforms state-of-the-art baselines. Under constrained annotation budgets, it significantly improves sampling quality for minority-class regions and boosts model generalization.

📝 Abstract
Active learning improves annotation efficiency by selecting the most informative samples for annotation and model training. While most prior work has focused on selecting informative images for classification tasks, we investigate the more challenging setting of dense prediction, where annotations are more costly and time-intensive, especially in medical imaging. Region-level annotation has been shown to be more efficient than image-level annotation for these tasks. However, existing methods for representative annotation region selection suffer from high computational and memory costs, irrelevant region choices, and heavy reliance on uncertainty sampling. We propose decomposition sampling (DECOMP), a new active learning sampling strategy that addresses these limitations. It enhances annotation diversity by decomposing images into class-specific components using pseudo-labels and sampling regions from each class. Class-wise predictive confidence further guides the sampling process, ensuring that difficult classes receive additional annotations. Across ROI classification, 2-D segmentation, and 3-D segmentation, DECOMP consistently surpasses baseline methods by better sampling minority-class regions and boosting performance on these challenging classes. Code is available at https://github.com/JingnaQiu/DECOMP.git.
Problem

Research questions and friction points this paper is trying to address.

Dense prediction annotation is costly and time-intensive, especially in medical imaging
Existing region-level selection methods incur high computational and memory costs
Minority-class regions are undersampled, limiting model performance on difficult classes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes images into class-specific components using pseudo-labels
Samples regions from each class to enhance annotation diversity
Uses class-wise predictive confidence to prioritize difficult classes
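The three steps above (class-wise decomposition via pseudo-labels, per-class region sampling, and confidence-guided budget allocation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `decomp_sample`, the inverse-confidence weighting `1 - mean_conf`, and the use of region centers rather than full region proposals are all assumptions for illustration.

```python
import numpy as np

def decomp_sample(pseudo_labels, confidence, n_regions, rng=None):
    """Hypothetical sketch of DECOMP-style region sampling.

    pseudo_labels : (H, W) int array of predicted class ids
    confidence    : (H, W) float array of per-pixel predictive confidence
    n_regions     : total number of annotation regions to select
    Returns a list of (row, col) region-center coordinates.
    """
    rng = np.random.default_rng(rng)
    classes = np.unique(pseudo_labels)

    # Step 1: decompose the image into class-specific components and
    # compute each class's mean predictive confidence.
    mean_conf = np.array(
        [confidence[pseudo_labels == c].mean() for c in classes]
    )

    # Step 3 (guides step 2): lower-confidence (harder) classes get a
    # larger share of the annotation budget; every class gets at least one.
    weights = 1.0 - mean_conf
    weights /= weights.sum()
    budget = np.maximum(1, np.round(weights * n_regions).astype(int))

    # Step 2: sample region centers from within each class component.
    centers = []
    for c, k in zip(classes, budget):
        rows, cols = np.nonzero(pseudo_labels == c)
        idx = rng.choice(len(rows), size=min(k, len(rows)), replace=False)
        centers.extend(zip(rows[idx].tolist(), cols[idx].tolist()))
    return centers
```

Because the budget is allocated per class before sampling, minority classes with few pixels still receive annotations, which is the coverage-balance property the summary emphasizes.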