AI Summary
This work addresses the optimization bias in RGB-infrared multimodal perception, where disparities in inter-modal information density and feature quality often lead models to over-rely on the dominant modality, thereby hindering effective fusion. To mitigate this issue, the study introduces the Modality Dominance Index (MDI), the first metric to quantitatively assess modality dominance, and proposes the MDACL framework. MDACL dynamically balances cross-modal optimization through Hierarchical Cross-modal Guidance (HCG) and Adversarial Equilibrium Regularization (AER). Evaluated on three RGB-infrared benchmark datasets, the method achieves state-of-the-art performance, significantly alleviating optimization bias and enhancing the robustness and effectiveness of multimodal fusion.
Abstract
RGB-Infrared (RGB-IR) multimodal perception is fundamental to embodied multimedia systems operating in complex physical environments. Although recent cross-modal fusion methods have advanced RGB-IR detection, the optimization dynamics caused by asymmetric modality characteristics remain underexplored. In practice, disparities in information density and feature quality introduce persistent optimization bias, leading training to overemphasize a dominant modality and hindering effective fusion. To quantify this phenomenon, we propose the Modality Dominance Index (MDI), which measures modality dominance by jointly modeling feature entropy and gradient contribution. Based on MDI, we develop a Modality Dominance-Aware Cross-modal Learning (MDACL) framework that regulates cross-modal optimization. MDACL incorporates Hierarchical Cross-modal Guidance (HCG) to enhance feature alignment and Adversarial Equilibrium Regularization (AER) to balance optimization dynamics during fusion. Extensive experiments on three RGB-IR benchmarks demonstrate that MDACL effectively mitigates optimization bias and achieves SOTA performance.
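The abstract states that MDI jointly models feature entropy and gradient contribution, but does not give the formula. A minimal sketch of one plausible instantiation is shown below; the function names, the `alpha` weighting, and the share-based normalization are assumptions for illustration, not the authors' actual definition:

```python
import numpy as np

def feature_entropy(features, bins=32):
    """Shannon entropy (in nats) of a feature map's value histogram."""
    hist, _ = np.histogram(features, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log(p)).sum())

def modality_dominance_index(feat_rgb, feat_ir, grad_rgb, grad_ir, alpha=0.5):
    """Toy dominance score in [0, 1]; values above 0.5 mean RGB dominates.

    Combines an entropy share (information density) with a gradient-norm
    share (optimization contribution); alpha weights the two terms.
    This is a hypothetical sketch, not the paper's MDI formulation.
    """
    h_rgb, h_ir = feature_entropy(feat_rgb), feature_entropy(feat_ir)
    g_rgb, g_ir = float(np.linalg.norm(grad_rgb)), float(np.linalg.norm(grad_ir))
    entropy_share = h_rgb / (h_rgb + h_ir + 1e-12)
    grad_share = g_rgb / (g_rgb + g_ir + 1e-12)
    return alpha * entropy_share + (1 - alpha) * grad_share
```

Under this sketch, identical modalities score 0.5 (balanced), and a modality that contributes larger gradients pushes the score toward its side, which is the kind of signal a regularizer like AER could then penalize.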