Revisiting Cross-Modal Knowledge Distillation: A Disentanglement Approach for RGBD Semantic Segmentation

📅 2025-05-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In RGB-D semantic segmentation, a practical challenge arises from modality mismatch between multimodal training and unimodal inference. Existing cross-modal knowledge distillation (CMKD) methods rely on multimodal teacher models, limiting generalization and deployment flexibility. To address this, we propose CroDiNo-KDβ€”the first CMKD framework that eliminates the need for a multimodal teacher. Departing from conventional teacher-student paradigms, CroDiNo-KD introduces a novel unimodal collaborative training mechanism integrating decoupled representation learning, contrastive learning, decoupled data augmentation, and cross-modal feature alignment. Evaluated on three cross-domain RGB-D benchmarks, CroDiNo-KD consistently outperforms state-of-the-art CMKD approaches, demonstrating superior effectiveness, robustness, and strong generalization across diverse domains and modalities.

πŸ“ Abstract
Multi-modal RGB and Depth (RGBD) data are predominant in many domains such as robotics, autonomous driving and remote sensing. The combination of these multi-modal data enhances environmental perception by providing 3D spatial context, which is absent in standard RGB images. Although RGBD multi-modal data can be available to train computer vision models, accessing all sensor modalities during the inference stage may be infeasible due to sensor failures or resource constraints, leading to a mismatch between data modalities available during training and inference. Traditional Cross-Modal Knowledge Distillation (CMKD) frameworks, developed to address this task, are typically based on a teacher/student paradigm, where a multi-modal teacher distills knowledge into a single-modality student model. However, these approaches face challenges in teacher architecture choices and distillation process selection, thus limiting their adoption in real-world scenarios. To overcome these issues, we introduce CroDiNo-KD (Cross-Modal Disentanglement: a New Outlook on Knowledge Distillation), a novel cross-modal knowledge distillation framework for RGBD semantic segmentation. Our approach simultaneously learns single-modality RGB and Depth models by exploiting disentanglement representation, contrastive learning and decoupled data augmentation with the aim to structure the internal manifolds of neural network models through interaction and collaboration. We evaluated CroDiNo-KD on three RGBD datasets across diverse domains, considering recent CMKD frameworks as competitors. Our findings illustrate the quality of CroDiNo-KD, and they suggest reconsidering the conventional teacher/student paradigm to distill information from multi-modal data to single-modality neural networks.
Problem

Research questions and friction points this paper is trying to address.

Addresses modality mismatch between training and inference in RGBD semantic segmentation
Overcomes limitations of traditional teacher/student cross-modal knowledge distillation
Proposes disentanglement-based framework for single-modality learning from multi-modal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled representation learning for RGBD data
Contrastive learning enhances modality collaboration
Decoupled data augmentation structures neural manifolds
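The cross-modal feature alignment the paper describes can be illustrated with a contrastive objective between the two unimodal encoders' shared embeddings. Below is a minimal NumPy sketch of a symmetric InfoNCE-style loss, where row i of each embedding matrix is treated as a positive RGB/depth pair; this is an assumption for illustration, and the paper's exact contrastive formulation may differ.

```python
import numpy as np

def info_nce_cross_modal(z_rgb, z_depth, temperature=0.1):
    """Symmetric InfoNCE-style loss aligning RGB and depth embeddings.

    z_rgb, z_depth: (N, D) arrays of per-sample embeddings from the two
    unimodal encoders; row i of each matrix is assumed to be a positive pair.
    """
    # L2-normalize so dot products are cosine similarities
    z_rgb = z_rgb / np.linalg.norm(z_rgb, axis=1, keepdims=True)
    z_depth = z_depth / np.linalg.norm(z_depth, axis=1, keepdims=True)
    logits = z_rgb @ z_depth.T / temperature  # (N, N) similarity matrix

    def xent_diag(l):
        # cross-entropy with the matching pair (diagonal) as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # average both retrieval directions: RGB -> depth and depth -> RGB
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

Minimizing this loss pulls the shared representations of the two single-modality models together, which is one way the collaborative training could structure the internal manifolds without a multimodal teacher.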
Roger Ferrod
University of Turin, Turin, Italy
C. Dantas
INRAE, UMR TETIS, Univ. Montpellier, Montpellier, France; EVERGREEN, Univ. Montpellier, Inria, Montpellier, France
Luigi Di Caro
Associate Professor
data mining, natural language processing
Dino Ienco
UMR TETIS, EVERGREEN, INRAE, INRIA
Machine Learning, Deep Learning, Time Series, Remote Sensing