AI Summary
In RGB-D semantic segmentation, a practical challenge arises from the modality mismatch between multimodal training and unimodal inference. Existing cross-modal knowledge distillation (CMKD) methods rely on multimodal teacher models, which limits their generalization and deployment flexibility. To address this, we propose CroDiNo-KD, the first CMKD framework that eliminates the need for a multimodal teacher. Departing from the conventional teacher/student paradigm, CroDiNo-KD introduces a collaborative training mechanism in which two single-modality models are trained jointly, integrating disentangled representation learning, contrastive learning, decoupled data augmentation, and cross-modal feature alignment. Evaluated on three cross-domain RGB-D benchmarks, CroDiNo-KD consistently outperforms state-of-the-art CMKD approaches, demonstrating superior effectiveness, robustness, and strong generalization across diverse domains and modalities.
Abstract
Multi-modal RGB and Depth (RGBD) data are predominant in many domains such as robotics, autonomous driving and remote sensing. Combining these modalities enhances environmental perception by providing 3D spatial context, which is absent in standard RGB images. Although RGBD multi-modal data may be available to train computer vision models, accessing all sensor modalities at inference time can be infeasible due to sensor failures or resource constraints, leading to a mismatch between the data modalities available during training and inference. Traditional Cross-Modal Knowledge Distillation (CMKD) frameworks, developed to address this problem, are typically based on a teacher/student paradigm in which a multi-modal teacher distills knowledge into a single-modality student model. However, these approaches face challenges in choosing the teacher architecture and designing the distillation process, limiting their adoption in real-world scenarios. To overcome these issues, we introduce CroDiNo-KD (Cross-Modal Disentanglement: a New Outlook on Knowledge Distillation), a novel cross-modal knowledge distillation framework for RGBD semantic segmentation. Our approach simultaneously learns single-modality RGB and Depth models by exploiting disentangled representations, contrastive learning and decoupled data augmentation, with the aim of structuring the internal manifolds of the neural networks through interaction and collaboration. We evaluated CroDiNo-KD on three RGBD datasets across diverse domains, considering recent CMKD frameworks as competitors. Our findings illustrate the quality of CroDiNo-KD, and they suggest reconsidering the conventional teacher/student paradigm for distilling information from multi-modal data to single-modality neural networks.
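To make the core idea more concrete, the following is a minimal, hedged sketch (not the authors' code) of how disentangled representations can be paired with a cross-modal contrastive objective: each embedding is split into a modality-shared and a modality-private part, and an InfoNCE-style loss pulls the shared halves of matching RGB/Depth pairs together while pushing non-matching pairs apart. All function names, dimensions, and the temperature value are illustrative assumptions.

```python
# Toy sketch of cross-modal contrastive alignment over disentangled features.
# This is NOT the CroDiNo-KD implementation; it only illustrates the concept.
import numpy as np

def split_features(z, shared_dim):
    """Disentangle an embedding into modality-shared and modality-private parts."""
    return z[:, :shared_dim], z[:, shared_dim:]

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss: matching RGB/Depth pairs are positives, all others negatives."""
    a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))        # cross-entropy vs. identity

# Stand-in embeddings for a batch of 8 RGB/Depth pixels or regions.
rng = np.random.default_rng(0)
z_rgb = rng.normal(size=(8, 64))
z_depth = rng.normal(size=(8, 64))

# Only the shared halves enter the alignment loss; private halves stay free
# to encode modality-specific cues (e.g., texture for RGB, geometry for Depth).
shared_rgb, private_rgb = split_features(z_rgb, shared_dim=32)
shared_depth, private_depth = split_features(z_depth, shared_dim=32)
loss = info_nce(shared_rgb, shared_depth)
print(loss)
```

In a full pipeline this loss would be one term alongside the segmentation losses of the two single-modality networks, so that the models shape each other's manifolds without any multimodal teacher.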