Improving Multimodal Learning Balance and Sufficiency through Data Remixing

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multimodal joint training, modality laziness and modality clash impede sufficient and balanced learning across modalities. To address this, the paper proposes Data Remixing, a three-stage paradigm comprising modality decoupling, modality-adaptive hard sample mining, and batch-level reassembly that aligns gradient directions. Without introducing additional training data or inference overhead, Data Remixing jointly optimizes unimodal sufficiency and multimodal balance; the authors claim it is the first method to improve both objectives simultaneously while mitigating cross-modal interference. Evaluated on CREMA-D and Kinetics-Sounds, it achieves absolute accuracy gains of about 6.50% and 3.41%, respectively. The approach is architecture-agnostic and compatible with mainstream multimodal models, requiring no architectural modifications or increased computational cost at inference.

📝 Abstract
Different modalities hold considerable gaps in optimization trajectories, including speeds and paths, which lead to modality laziness and modality clash when jointly training multimodal models, resulting in insufficient and imbalanced multimodal learning. Existing methods focus on enforcing the weak modality by adding modality-specific optimization objectives, aligning their optimization speeds, or decomposing multimodal learning to enhance unimodal learning. These methods fail to achieve both unimodal sufficiency and multimodal balance. In this paper, we, for the first time, address both concerns by proposing multimodal Data Remixing, including decoupling multimodal data and filtering hard samples for each modality to mitigate modality imbalance; and then batch-level reassembling to align the gradient directions and avoid cross-modal interference, thus enhancing unimodal learning sufficiency. Experimental results demonstrate that our method can be seamlessly integrated with existing approaches, improving accuracy by approximately 6.50%↑ on CREMA-D and 3.41%↑ on Kinetics-Sounds, without training set expansion or additional computational overhead during inference. The source code is available at https://github.com/MatthewMaxy/Remix_ICML2025.
Problem

Research questions and friction points this paper is trying to address.

Addressing modality imbalance in multimodal learning
Enhancing unimodal learning sufficiency and balance
Mitigating cross-modal interference through data remixing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling multimodal data for balance
Filtering hard samples per modality
Batch-level reassembling to align gradients
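The three innovation bullets above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the sample format, the loss-threshold criterion for "hard" samples, and the single-modality batching policy are all assumptions made for clarity.

```python
import random

def decouple(batch):
    """Stage 1 (assumed): split each audio-video sample into two unimodal
    views by masking out the other modality."""
    audio_view = [{"audio": s["audio"], "video": None, "label": s["label"]} for s in batch]
    video_view = [{"audio": None, "video": s["video"], "label": s["label"]} for s in batch]
    return audio_view, video_view

def filter_hard(samples, loss_fn, threshold):
    """Stage 2 (assumed): keep only samples whose unimodal loss exceeds a
    threshold, i.e. samples that are still hard for that modality."""
    return [s for s in samples if loss_fn(s) > threshold]

def reassemble(audio_hard, video_hard, batch_size):
    """Stage 3 (assumed): reassemble mini-batches so each batch holds a
    single modality, avoiding conflicting gradient directions within a batch."""
    batches = []
    for pool in (audio_hard, video_hard):
        for i in range(0, len(pool), batch_size):
            batches.append(pool[i:i + batch_size])
    random.shuffle(batches)  # mix the order of unimodal batches across training
    return batches
```

Because every emitted batch is unimodal, the per-batch gradient is driven by one modality at a time, which is one plausible reading of "batch-level reassembling to align the gradient directions".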
Xiaoyu Ma
Carnegie Mellon University
Transportation network modeling, machine learning, reinforcement learning, simulation, optimization

Hao Chen
School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Yongjian Deng
Beijing University of Technology
Event-based vision, cross-modal learning, computational photography, graph-based representation