🤖 AI Summary
In multimodal video understanding, models often overfit to dominant modalities while suppressing contributions from weaker ones, degrading generalization. To address this, we propose a feature-level dynamic mixing framework: first, Multimodal Mixup (MM) is applied *after* multimodal feature fusion to generate synthetic feature-label pairs; second, a balancing strategy (B-MM) dynamically adjusts mixing weights based on each modality's real-time contribution, mitigating modality imbalance. This work is the first to apply Mixup in the post-fusion feature space and enables adaptive, contribution-aware modality weighting. Extensive experiments across multiple video understanding benchmarks demonstrate significant improvements in model generalization and robustness, particularly under modality corruption or distribution shift, validating the efficacy of our approach for complex multimodal tasks.
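The core MM step, mixing fused feature vectors and their labels, can be sketched roughly as below. This is a minimal illustration assuming one-hot labels and a standard Beta-distributed mixing coefficient (as in the original Mixup formulation); the paper's exact fusion and training details are not reproduced here.

```python
import numpy as np

def multimodal_mixup(fused_features, labels, alpha=0.2, rng=None):
    """Mixup applied AFTER multimodal fusion: convexly combine fused
    feature vectors and their one-hot labels with a Beta(alpha, alpha)
    coefficient, producing virtual feature-label pairs.

    fused_features: (batch, dim) array of post-fusion features.
    labels: (batch, num_classes) one-hot label array.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    idx = rng.permutation(len(fused_features))  # random pairing within the batch
    mixed_x = lam * fused_features + (1 - lam) * fused_features[idx]
    mixed_y = lam * labels + (1 - lam) * labels[idx]
    return mixed_x, mixed_y
```

Because the mixing happens on the already-aggregated representation, a single coefficient suffices here; per-modality coefficients only become relevant once the B-MM balancing step is introduced.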
📝 Abstract
Multimodal video understanding plays a crucial role in tasks such as action recognition and emotion classification by combining information from different modalities. However, multimodal models are prone to overfitting strong modalities, which can dominate learning and suppress the contributions of weaker ones. To address this challenge, we first propose Multimodal Mixup (MM), which applies the Mixup strategy at the aggregated multimodal feature level to mitigate overfitting by generating virtual feature-label pairs. While MM effectively improves generalization, it treats all modalities uniformly and does not account for modality imbalance during training. Building on MM, we further introduce Balanced Multimodal Mixup (B-MM), which dynamically adjusts the mixing ratios for each modality based on their relative contributions to the learning objective. Extensive experiments on several datasets demonstrate the effectiveness of our methods in improving generalization and multimodal robustness.