🤖 AI Summary
This work addresses spurious correlations and modality conflicts in multimodal representation learning that arise from causal incompleteness. We propose a unified framework that jointly enforces causal sufficiency and necessity, and formally define and model the Causal Complete Cause ($C^3$). Theoretically, we establish the identifiability of $C^3$ by relaxing strong assumptions such as exogeneity and monotonicity. Methodologically, we design a dual-branch counterfactual network that makes the $C^3$ risk estimable and introduce a plug-and-play $C^3$ regularization module. Evaluated across multiple cross-modal benchmarks, our approach consistently improves model robustness and generalization while significantly reducing prediction bias. Empirical results demonstrate that causally complete representations yield substantial gains on downstream tasks.
📝 Abstract
Multi-Modal Learning (MML) aims to learn effective representations across modalities for accurate predictions. Existing methods typically focus on modality consistency and specificity to learn effective representations. However, from a causal perspective, they may yield representations that contain insufficient and unnecessary information. To address this, we propose that effective MML representations should be causally sufficient and necessary. Considering practical issues such as spurious correlations and modality conflicts, we relax the exogeneity and monotonicity assumptions prevalent in prior work and explore a concept specific to MML, i.e., the Causal Complete Cause ($C^3$). We begin by defining $C^3$, which quantifies the probability that a representation is causally sufficient and necessary. We then discuss the identifiability of $C^3$ and introduce an instrumental variable to support identifying $C^3$ under non-exogeneity and non-monotonicity. Building on this, we construct a measurable objective for $C^3$, i.e., the $C^3$ risk. We propose a twin network to estimate it through (i) a real-world branch, which utilizes the instrumental variable for sufficiency, and (ii) a hypothetical-world branch, which applies gradient-based counterfactual modeling for necessity. Theoretical analyses confirm its reliability. Based on these results, we propose $C^3$ Regularization, a plug-and-play method that enforces the causal completeness of the learned representations by minimizing the $C^3$ risk. Extensive experiments demonstrate its effectiveness.
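To make the twin-network objective concrete, here is a minimal NumPy sketch of a $C^3$-style risk, assuming a sufficiency term from the real-world branch (the factual prediction should hit the label) and a necessity term from the hypothetical-world branch (the counterfactual prediction, with the candidate cause removed, should *not* retain the label). The function name, the specific cross-entropy forms, and the weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def c3_risk(real_logits, cf_logits, labels, lam=1.0):
    """Hypothetical C^3 risk sketch.

    Sufficiency: cross-entropy of the real-world branch's factual
    predictions against the labels.
    Necessity: penalty on the hypothetical-world branch still predicting
    the true label after the candidate cause is counterfactually removed.
    """
    n = len(labels)
    p_real = softmax(real_logits)
    p_cf = softmax(cf_logits)
    # Sufficiency: factual prediction should be confident in the label.
    sufficiency = -np.log(p_real[np.arange(n), labels] + 1e-12).mean()
    # Necessity: counterfactual prediction should lose the label.
    necessity = -np.log(1.0 - p_cf[np.arange(n), labels] + 1e-12).mean()
    return sufficiency + lam * necessity
```

As a plug-and-play regularizer, this term would be added to the ordinary task loss, e.g. `total = task_loss + beta * c3_risk(...)`, so that minimizing the combined objective pushes the representation toward causal completeness.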