Label What Matters: Modality-Balanced and Difficulty-Aware Multimodal Active Learning

📅 2026-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal active learning approaches often overlook the dynamic shifts in modality importance and the variations in sample difficulty, leading to suboptimal annotation efficiency and imbalanced modality utilization. To address these limitations, this work proposes RL-MBA, a framework that formulates sample selection as a Markov decision process. It integrates an Adaptive Modality Contribution Balancing (AMCB) mechanism with an Evidential Fusion-based Difficulty-Aware (EFDA) strategy, leveraging reinforcement learning to dynamically adjust modality weights and respond to sample uncertainty. Experiments on Food101, KineticsSound, and VGGSound demonstrate that RL-MBA significantly improves classification accuracy and modality fairness under limited annotation budgets, outperforming current active learning methods.

📝 Abstract
Multimodal learning integrates complementary information from different modalities such as image, text, and audio to improve model performance, but its success relies on large-scale labeled data, which is costly to obtain. Active learning (AL) mitigates this challenge by selectively annotating informative samples. In multimodal settings, many approaches implicitly assume that modality importance is stable across rounds and keep selection rules fixed at the fusion stage, which leaves them insensitive to the dynamic nature of multimodal learning, where the relative value of modalities and the difficulty of instances shift as training proceeds. To address this issue, we propose RL-MBA, a reinforcement-learning framework for modality-balanced, difficulty-aware multimodal active learning. RL-MBA models sample selection as a Markov Decision Process, where the policy adapts to modality contributions, uncertainty, and diversity, and the reward encourages accuracy gains and balance. Two key components drive this adaptability: (1) Adaptive Modality Contribution Balancing (AMCB), which dynamically adjusts modality weights via reinforcement feedback, and (2) Evidential Fusion for Difficulty-Aware Policy Adjustment (EFDA), which estimates sample difficulty via uncertainty-based evidential fusion to prioritize informative samples. Experiments on Food101, KineticsSound, and VGGSound demonstrate that RL-MBA consistently outperforms strong baselines, improving both classification accuracy and modality fairness under limited labeling budgets.
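To make the EFDA idea concrete, a minimal sketch of evidential, uncertainty-based difficulty scoring is shown below. It uses the standard subjective-logic construction (Dirichlet vacuity u = K / sum(alpha) with alpha = evidence + 1) and simple additive fusion of per-modality evidence; the function names, the additive fusion rule, and the two-modality setup are illustrative assumptions, since the abstract does not detail the paper's exact formulation.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Vacuity uncertainty of a Dirichlet opinion built from non-negative
    per-class evidence: alpha = evidence + 1, u = K / sum(alpha).
    u is 1 with no evidence and shrinks as evidence accumulates."""
    alpha = evidence + 1.0
    num_classes = evidence.shape[-1]
    return num_classes / alpha.sum(axis=-1)

def fused_difficulty(evidence_per_modality):
    """Fuse per-modality evidence (here: simple additive fusion, an
    assumption) and score difficulty as the fused uncertainty.
    Higher score = harder / more informative sample."""
    fused = sum(evidence_per_modality)
    return dirichlet_uncertainty(fused)

def select_batch(evidence_img, evidence_txt, budget):
    """Pick the `budget` unlabeled samples with the highest fused
    uncertainty for annotation."""
    scores = fused_difficulty([evidence_img, evidence_txt])
    return np.argsort(scores)[::-1][:budget]

# Three unlabeled samples, four classes, two modalities.
e_img = np.array([[0, 0, 0, 0], [10, 0, 0, 0], [2, 2, 2, 2]], dtype=float)
e_txt = np.array([[0, 0, 0, 0], [5, 0, 0, 0], [1, 1, 1, 1]], dtype=float)
print(select_batch(e_img, e_txt, budget=2))  # sample 0 (no evidence) ranks first
```

In RL-MBA these scores would feed the selection policy rather than a fixed top-k rule, with AMCB reweighting each modality's contribution from reinforcement feedback instead of the uniform sum used here.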
Problem

Research questions and friction points this paper is trying to address.

multimodal active learning
modality importance
sample difficulty
dynamic adaptation
label efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Active Learning
Reinforcement Learning
Modality Balancing
Difficulty-Aware Selection
Evidential Fusion
Yuqiao Zeng
Key Laboratory of Big Data and Artificial Intelligence in Transportation, Ministry of Education; State Key Laboratory of Advanced Rail Autonomous Operation; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
Xu Wang
Hebei University of Technology
NLP, Knowledge Graph
Tengfei Liang
Key Laboratory of Big Data and Artificial Intelligence in Transportation, Ministry of Education; State Key Laboratory of Advanced Rail Autonomous Operation; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
Yiqing Hao
Key Laboratory of Big Data and Artificial Intelligence in Transportation, Ministry of Education; State Key Laboratory of Advanced Rail Autonomous Operation; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
Yi Jin
Beijing Jiaotong University
computer vision, machine learning
Hui Yu
Professor of Visual and Cognitive Computing, University of Glasgow
Visual Computing, Cognitive Computing, Social Robot, Parallel Intelligence