🤖 AI Summary
Multimodal contrastive learning often captures only redundant, shared information across modalities, failing to model modality-unique and synergistic interactions. To address this, we propose CoMM, a framework that abandons explicit cross-modal feature alignment and instead maximizes mutual information among augmented representations within a unified multimodal embedding space—enabling end-to-end co-modeling. For the first time, we rigorously disentangle multimodal information into redundant, unique, and synergistic components from an information-theoretic perspective, and theoretically prove that mutual information maximization inherently balances these three components. The method is fully differentiable and requires no paired multimodal supervision. Controlled ablation studies validate the accuracy of our information disentanglement, and CoMM achieves state-of-the-art performance on seven real-world multimodal benchmarks.
📝 Abstract
Humans perceive the world through multisensory integration, blending information from different modalities to adapt their behavior. Contrastive learning offers an appealing solution for multimodal self-supervised learning. Indeed, by considering each modality as a different view of the same entity, it learns to align features of different modalities in a shared representation space. However, this approach is intrinsically limited: it only captures shared or redundant information between modalities, while multimodal interactions can arise in other ways. In this work, we introduce CoMM, a Contrastive MultiModal learning strategy that enables communication between modalities in a single multimodal space. Instead of imposing cross- or intra-modality constraints, we propose to align multimodal representations by maximizing the mutual information between augmented versions of these multimodal features. Our theoretical analysis shows that shared, synergistic and unique terms of information naturally emerge from this formulation, allowing us to estimate multimodal interactions beyond redundancy. We evaluate CoMM in both a controlled setting and a series of real-world settings: in the former, we demonstrate that CoMM effectively captures redundant, unique and synergistic information between modalities; in the latter, CoMM learns complex multimodal interactions and achieves state-of-the-art results on seven multimodal benchmarks. Code is available at https://github.com/Duplums/CoMM
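The core idea above — fuse the modalities first, then apply an InfoNCE-style contrastive loss between two independently augmented views of the fused representation, rather than aligning one modality against another — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear fusion encoder, the Gaussian-noise "augmentation", and all dimensions are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(x1, x2, W):
    """Toy multimodal encoder: concatenate modality features and project
    them linearly into a single shared multimodal embedding space
    (hypothetical stand-in for CoMM's fusion network)."""
    return np.concatenate([x1, x2], axis=1) @ W

def augment(x, scale=0.1):
    """Stand-in augmentation: additive Gaussian noise on the features."""
    return x + scale * rng.normal(size=x.shape)

def info_nce(za, zb, temperature=0.1):
    """Symmetric InfoNCE loss; positives are matching rows, i.e. two
    augmented views of the same multimodal input."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature

    def xent(l):
        # cross-entropy with the diagonal as the positive class
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return 0.5 * (xent(logits) + xent(logits.T))

B, d1, d2, d = 16, 32, 24, 8
x1 = rng.normal(size=(B, d1))       # modality-1 features (e.g. image)
x2 = rng.normal(size=(B, d2))       # modality-2 features (e.g. text)
W = rng.normal(size=(d1 + d2, d))   # shared fusion/projection weights

# Two independently augmented views of the *fused* multimodal representation:
# the loss compares multimodal view against multimodal view, never
# modality against modality.
za = fuse(augment(x1), augment(x2), W)
zb = fuse(augment(x1), augment(x2), W)
loss = info_nce(za, zb)
print(float(loss))
```

The key contrast with CLIP-style objectives is visible in the last few lines: both arguments of the loss already contain every modality, so maximizing their agreement can in principle preserve unique and synergistic information rather than only the redundant part shared across modalities.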