FSSUAVL: A Discriminative Framework using Vision Models for Federated Self-Supervised Audio and Image Understanding

📅 2025-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces practical challenges in multi-modal settings—particularly with audio and visual data—that are inherently decentralized, heterogeneous, and typically unpaired across clients. Method: This paper proposes FSSUAVL, a single-model framework for federated self-supervised unified audio-visual learning. It is the first to eliminate explicit modality alignment and generative auxiliary modules in FL, instead leveraging self-supervised contrastive learning to jointly learn discriminative cross-modal representations within a unified embedding space. FSSUAVL integrates ViT and CNN backbones, supports both paired and unpaired data scenarios, and is naturally extensible to multi-modal settings. Contribution/Results: Extensive experiments demonstrate that FSSUAVL significantly outperforms unimodal baselines on diverse downstream image and audio tasks. It effectively incorporates auxiliary cross-modal information to improve recognition accuracy, validating its architecture-agnostic generalization and efficacy for federated multi-modal learning.

📝 Abstract
Recent studies have demonstrated that vision models can effectively learn multimodal audio-image representations when the modalities are paired. However, the challenge of enabling deep models to learn representations from unpaired modalities remains unresolved. This issue is especially pertinent in scenarios like Federated Learning (FL), where data is often decentralized, heterogeneous, and lacks a reliable guarantee of paired data. Previous attempts tackled this issue through the use of auxiliary pretrained encoders or generative models on local clients, which invariably raise computational cost with an increasing number of modalities. Unlike these approaches, in this paper, we aim to address the task of unpaired audio and image recognition using FSSUAVL, a single deep model pretrained in FL with self-supervised contrastive learning (SSL). Instead of aligning the audio and image modalities, FSSUAVL jointly discriminates them by projecting them into a common embedding space using contrastive SSL. This extends the utility of FSSUAVL to paired and unpaired audio and image recognition tasks. Our experiments with CNN and ViT demonstrate that FSSUAVL significantly improves performance across various image- and audio-based downstream tasks compared to using separate deep models for each modality. Additionally, FSSUAVL's capacity to learn multimodal feature representations allows for integrating auxiliary information, if available, to enhance recognition accuracy.
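The core idea in the abstract is that, rather than aligning audio with image pairs, a single model can discriminate instances from both modalities in one shared embedding space via contrastive SSL. A minimal NumPy sketch of that kind of instance-discrimination objective (an InfoNCE-style loss; the paper's exact loss and encoders are not specified here, so the function and toy embeddings below are illustrative assumptions):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(z_view1, z_view2, temperature=0.1):
    """InfoNCE over a mixed batch: z_view1[i] and z_view2[i] are embeddings
    of two augmented views of the same sample (audio or image); every other
    sample in the batch, regardless of modality, acts as a negative."""
    z1, z2 = l2_normalize(z_view1), l2_normalize(z_view2)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives sit on the diagonal

rng = np.random.default_rng(0)
# Toy embeddings standing in for outputs of a shared 16-d embedding space;
# in FSSUAVL these would come from one model fed audio or image inputs.
z_a = rng.standard_normal((8, 16))
z_b = z_a + 0.05 * rng.standard_normal((8, 16))   # slightly perturbed second views
print(float(info_nce_loss(z_a, z_b)))
```

Because each sample only competes against the rest of the batch, this objective needs no audio-image pairing, which is why it suits unpaired federated clients.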
Problem

Research questions and friction points this paper is trying to address.

Learning multimodal representations from unpaired audio-image data
Addressing decentralized heterogeneous data in Federated Learning
Reducing computational cost in multimodal recognition tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated self-supervised contrastive learning framework
Single model for unpaired audio-image recognition
Common embedding space via contrastive SSL
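In the federated setting above, each client pretrains the single shared model locally and a server aggregates the updates. As a hedged illustration of that aggregation step, here is a standard size-weighted FedAvg sketch in NumPy (the paper's actual aggregation rule and client setup are assumptions here, not taken from the source):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted FedAvg: average each parameter array across clients,
    weighted by local dataset size. A common FL aggregation baseline."""
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    n_params = len(client_weights[0])
    return [
        sum(c * client[k] for c, client in zip(coeffs, client_weights))
        for k in range(n_params)
    ]

# Two hypothetical clients, each holding only one (unpaired) modality locally,
# but training the same single-model parameter list.
w_audio_client = [np.ones((2, 2)), np.zeros(2)]
w_image_client = [3 * np.ones((2, 2)), np.ones(2)]
global_w = fedavg([w_audio_client, w_image_client], client_sizes=[100, 300])
print(global_w[0])  # each entry: 0.25 * 1 + 0.75 * 3 = 2.5
```

Because every client updates the same single model, aggregation stays identical to the unimodal FL case; no per-modality modules need to be synchronized.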