🤖 AI Summary
The strong directed acyclic graph (DAG) assumption in conventional causal modeling is often violated in real-world scenarios, limiting applicability. Method: This paper proposes an implicit partial causal modeling framework that dispenses with explicit DAG constraints. It introduces two latent variables coupled via undirected edges to characterize cross-modal knowledge transfer, and—crucially—establishes the first identifiable correspondence between multimodal contrastive learning representations and these latent coupled variables. Theoretically, it transcends traditional directed causal graphs, offering a novel paradigm for implicit causal structure modeling; methodologically, it integrates latent variable modeling, statistical identifiability analysis, and pre-trained model decoupling techniques (e.g., CLIP). Results: Empirical evaluation on synthetic data confirms robustness to misspecified assumptions; on CLIP, it achieves effective representation decoupling, yielding substantial improvements in few-shot learning and cross-domain generalization performance.
📝 Abstract
Directed acyclic graphs (DAGs) are fundamental graph structures in causal modeling, but identifying the desired DAG from observational data often requires strong assumptions that may not hold in real-world scenarios, especially for latent causal models and complex multimodal data. This raises the question of whether we can relax or bypass the DAG assumption while maintaining practical utility. In this work, we propose a novel latent partial causal model for multimodal data, featuring two latent coupled variables, connected by an undirected edge, to represent the transfer of knowledge across modalities. Under specific statistical assumptions, we establish an identifiability result, demonstrating that representations learned by multimodal contrastive learning correspond to the latent coupled variables up to a trivial transformation. This result deepens our understanding of why multimodal contrastive learning works, highlights its potential for disentanglement, and expands the utility of pre-trained models like CLIP. Synthetic experiments confirm the robustness of our findings, even when the assumptions are partially violated. Most importantly, experiments show that a pre-trained CLIP model embodies disentangled representations, enabling few-shot learning and improving domain generalization across diverse real-world datasets. Together, these contributions push the boundaries of multimodal contrastive learning, both theoretically and, crucially, in practical applications.
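The identifiability result concerns representations learned by multimodal contrastive learning. For context, below is a minimal NumPy sketch of the symmetric InfoNCE-style objective that CLIP-like models optimize, where matched image/text pairs sit on the diagonal of a cosine-similarity matrix. The function name, batch shapes, and temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss for a batch of paired embeddings.

    img_emb, txt_emb: (N, d) arrays; row i of each is a matched pair.
    Illustrative sketch of a CLIP-style objective, not the paper's method.
    """
    # L2-normalize so inner products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N); positives on the diagonal

    def cross_entropy_diag(l):
        # Cross-entropy with the matching pair (diagonal) as the target.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

The abstract's claim is that, under the stated statistical assumptions, minimizers of an objective of this kind recover the latent coupled variables up to a trivial transformation.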