Mixup Helps Understanding Multimodal Video Better

📅 2025-10-12
🤖 AI Summary
In multimodal video understanding, models often overfit to dominant modalities while suppressing contributions from weaker ones, degrading generalization. To address this, we propose a feature-level dynamic mixing framework: first, multimodal Mixup (MM) is applied *after* multimodal feature fusion to generate synthetic feature-label pairs; second, a balancing strategy (B-MM) dynamically adjusts mixing weights based on each modality’s real-time contribution, mitigating modality imbalance. This work is the first to apply Mixup in the post-fusion feature space and enables adaptive, contribution-aware modality weighting. Extensive experiments across multiple video understanding benchmarks demonstrate significant improvements in model generalization and robustness—particularly under modality corruption or distribution shift—validating the efficacy of our approach for complex multimodal tasks.
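The post-fusion Mixup step described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the function name, the Beta parameter `alpha`, and the within-batch pairing are assumptions chosen to match the standard Mixup recipe, applied here to the aggregated multimodal features rather than raw inputs.

```python
import numpy as np

def multimodal_mixup(fused_feats, labels, alpha=0.2, rng=None):
    """Post-fusion Mixup: interpolate aggregated multimodal features
    and their labels to generate virtual feature-label pairs.

    fused_feats: (batch, dim) array of already-fused multimodal features.
    labels: (batch, num_classes) one-hot or soft label array.
    alpha: Beta-distribution parameter controlling interpolation strength
           (an assumed default; the paper's value may differ).
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(fused_feats))  # random pairing within the batch
    mixed_feats = lam * fused_feats + (1 - lam) * fused_feats[perm]
    mixed_labels = lam * labels + (1 - lam) * labels[perm]
    return mixed_feats, mixed_labels
```

Because the interpolation is convex, one-hot labels mix into valid soft labels that still sum to one, which is what lets the classifier train directly on the virtual pairs.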

📝 Abstract
Multimodal video understanding plays a crucial role in tasks such as action recognition and emotion classification by combining information from different modalities. However, multimodal models are prone to overfitting strong modalities, which can dominate learning and suppress the contributions of weaker ones. To address this challenge, we first propose Multimodal Mixup (MM), which applies the Mixup strategy at the aggregated multimodal feature level to mitigate overfitting by generating virtual feature-label pairs. While MM effectively improves generalization, it treats all modalities uniformly and does not account for modality imbalance during training. Building on MM, we further introduce Balanced Multimodal Mixup (B-MM), which dynamically adjusts the mixing ratios for each modality based on their relative contributions to the learning objective. Extensive experiments on several datasets demonstrate the effectiveness of our methods in improving generalization and multimodal robustness.
Problem

Research questions and friction points this paper is trying to address.

Addresses overfitting to dominant modalities in video understanding
Proposes balanced mixing to handle modality imbalance dynamically
Improves generalization and robustness in multimodal learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixup applied to multimodal feature level
Balanced Mixup adjusts modality mixing ratios
Dynamic ratio adjustment based on modality contributions
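The contribution-aware adjustment in B-MM can be illustrated as follows. The paper does not spell out its weighting rule in this summary, so this sketch assumes one plausible form: per-modality loss is treated as a (inverse) contribution signal, and a softmax assigns larger mixing weight to weaker modalities to counter modality imbalance. The function name and temperature parameter are hypothetical.

```python
import numpy as np

def balanced_mix_weights(modality_losses, temperature=1.0):
    """Illustrative contribution-aware weighting (assumed form, not the
    paper's exact rule): modalities with higher training loss are read as
    contributing less, so they receive a larger mixing weight, nudging
    optimization back toward balance across modalities."""
    losses = np.asarray(modality_losses, dtype=float)
    logits = losses / temperature        # higher loss -> larger weight
    w = np.exp(logits - logits.max())    # numerically stable softmax
    return w / w.sum()
```

With three modalities where the first is weakest (highest loss), e.g. `balanced_mix_weights([2.0, 1.0, 0.5])`, the first modality receives the largest share of the mixing budget.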
Xiaoyu Ma
Carnegie Mellon University
Transportation network modeling, machine learning, reinforcement learning, simulation, optimization

Ding Ding
School of Computer Science and Engineering, Southeast University, Nanjing, China

Hao Chen
School of Computer Science and Engineering, Southeast University, Nanjing, China