Federated learning for unpaired multimodal data through a homogeneous transformer model

📅 2026-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving cross-modal semantic alignment in federated learning when multimodal data are decentralized, unpaired, and privacy-sensitive. The authors propose a novel federated multimodal learning framework that operates without requiring sample-wise modality pairing. By leveraging shared public anchors and Gram matrix–based centered kernel alignment (CKA), the method aligns semantic directions across heterogeneous modalities without transmitting private data. It further disentangles modality-specific magnitudes from shared semantic information and integrates subspace-stabilized fine-tuning with an uncertainty-aware, accuracy-weighted aggregation strategy. This approach trains a unified global multimodal Transformer, enabling high-fidelity semantic alignment and efficient knowledge aggregation over fully unpaired, distributed, and private data, thereby establishing a new paradigm for building multimodal foundation models in privacy-constrained settings.
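The Gram-matrix CKA step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each node embeds the same public anchor set with its own encoder and that the nodes compare representations via linear CKA (HSIC on double-centered Gram matrices); all function names are illustrative.

```python
import numpy as np

def centered_gram(X):
    """Double-centered linear Gram matrix of features X (n_anchors x d).

    Only this n x n matrix over the *public* anchors is compared across
    modalities; no private samples or raw embeddings leave a node.
    """
    K = X @ X.T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return H @ K @ H

def cka(X, Y):
    """Linear CKA between two representations of the same public anchors.

    Returns a value in [0, 1]; 1 means the anchor geometries agree up to
    rotation and isotropic scaling, which is exactly the invariance that
    lets heterogeneous encoders be aligned by semantic direction.
    """
    K, L = centered_gram(X), centered_gram(Y)
    hsic = np.sum(K * L)  # tr(KL) for symmetric K, L
    return hsic / (np.linalg.norm(K) * np.linalg.norm(L))
```

Because linear CKA is invariant to orthogonal transforms and scaling of the features, two modality encoders with different dimensionalities can still be scored against each other on the shared anchors.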

📝 Abstract
Training of multimodal foundation models is currently restricted to centralized data centers containing massive, aligned datasets (e.g., image-text pairs). However, in realistic federated environments, data is often unpaired and fragmented across disjoint nodes; one node may hold sensor data, while another holds textual logs. These datasets are strictly private and share no common samples. Current federated learning (FL) methods fail in this regime, as they assume local clients possess aligned pairs or require sharing raw feature embeddings, which violates data sovereignty. We propose a novel framework to train a global multimodal transformer across decentralized nodes with disjoint modalities. We introduce a small public anchor set to align disjoint private manifolds. Using Gram matrices calculated from these public anchors, we enforce semantic alignment across modalities through centered kernel alignment without ever transmitting private samples, offering a mathematically stronger privacy guarantee than prototype sharing. Further, we introduce a subspace-stabilized fine-tuning method to handle FL with large transformer models. We strictly decouple domain-specific magnitude shifts from semantic direction, ensuring that nodes with varying sensor characteristics align geometrically to the global consensus. Lastly, we propose precision-weighted averaging, where efficiently obtained uncertainty estimates are used to downweight uncertain nodes. This paper establishes the mathematical backbone for federated unpaired foundation models, enabling a global model to learn a unified representation of the world from fragmented, disjoint, and private data silos without requiring centralized storage or paired samples.
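The precision-weighted averaging idea from the abstract can be sketched as follows. This is a hedged illustration, assuming each node reports a flattened parameter update plus a scalar uncertainty estimate (e.g., predictive variance) and optionally a validation accuracy; the function name and the exact weighting form are assumptions, not the paper's specification.

```python
import numpy as np

def precision_weighted_average(updates, variances, accuracies=None):
    """Server-side aggregation that downweights uncertain nodes.

    updates:    list of 1-D parameter update vectors, one per node
    variances:  per-node scalar uncertainty estimates (higher = less reliable)
    accuracies: optional per-node validation accuracies used as extra weights
    """
    # Precision = inverse variance; epsilon guards against division by zero.
    w = 1.0 / (np.asarray(variances, dtype=float) + 1e-8)
    if accuracies is not None:
        w = w * np.asarray(accuracies, dtype=float)
    w = w / w.sum()  # normalize to a convex combination
    return np.einsum("i,ij->j", w, np.stack(updates))
```

With equal variances this reduces to plain federated averaging; a node reporting a large variance contributes almost nothing to the global update.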
Problem

Research questions and friction points this paper is trying to address.

federated learning
unpaired multimodal data
data sovereignty
disjoint modalities
foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

federated learning
unpaired multimodal data
homogeneous transformer
centered kernel alignment
privacy-preserving representation