🤖 AI Summary
Existing cross-modal alignment methods for visual classification often fail to achieve one-step feature projection due to inherent discrepancies between the vision and text modalities, particularly in class distribution and feature scale. To address this, we propose a two-stage diffusion-based alignment framework mediated by a shared semantic space: (1) a diffusion-controlled semantic learner constructs modality-invariant semantic center representations; (2) a diffusion translator, working jointly with a progressive feature interaction network, enables stepwise calibration of visual features toward the textual distribution. Our work introduces, for the first time, a semantic-space mediation mechanism and a dual-stage diffusion alignment paradigm, integrating class-level semantic modeling with cross-modal distribution constraints. Extensive experiments on multiple visual classification benchmarks demonstrate substantial improvements in alignment quality and state-of-the-art performance over existing methods.
📝 Abstract
Cross-modal alignment is an effective approach to improving visual classification. Existing studies typically enforce a one-step mapping that uses deep neural networks to project visual features to mimic the distribution of textual features. However, such a projection is often difficult to find because the two modalities differ in both the distribution of class-wise samples and the range of their feature values. To address this issue, this paper proposes a novel Semantic-Space-Intervened Diffusive Alignment method, termed SeDA, which models a semantic space as a bridge in the visual-to-textual projection, considering that both types of features share the same class-level information in classification. More importantly, a bi-stage diffusion framework is developed to enable progressive alignment between the two modalities. Specifically, SeDA first employs a Diffusion-Controlled Semantic Learner to model the semantic space of visual features by constraining the interactive features of the diffusion model and the category centers of visual features. In the latter stage, the Diffusion-Controlled Semantic Translator focuses on learning the distribution of textual features from the semantic space. Meanwhile, a Progressive Feature Interaction Network introduces stepwise feature interactions at each alignment step, progressively integrating textual information into the mapped features. Experimental results show that SeDA achieves stronger cross-modal feature alignment, leading to superior performance over existing methods across multiple scenarios.
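The bi-stage idea above can be illustrated with a toy numerical sketch: first recentre visual features onto shared class-level "semantic" centers, then apply a sequence of small calibration steps that progressively pull them toward the textual distribution. Everything here (the midpoint centers, the convex update, all dimensions and names) is an illustrative assumption, not the paper's actual learned models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: features for 3 classes; the visual features
# deliberately differ from the textual ones in both offset and scale.
num_classes, dim, per_class = 3, 8, 16
text_centers = rng.normal(0.0, 1.0, (num_classes, dim))       # textual class features
visual = rng.normal(2.0, 3.0, (num_classes, per_class, dim))  # shifted, wider spread

# Stage 1 (semantic learner, sketched): build modality-shared class
# centers as the midpoint of visual and textual class centers, then
# recentre visual features on them (a crude stand-in for the learned
# diffusion-controlled mapping).
visual_centers = visual.mean(axis=1)
semantic_centers = 0.5 * (visual_centers + text_centers)
z = visual - visual_centers[:, None, :] + semantic_centers[:, None, :]

# Stage 2 (semantic translator, sketched): T small alignment steps,
# each nudging features toward the textual class center -- "stepwise
# feature interaction" reduced to a convex update.
T, alpha = 10, 0.3
for t in range(T):
    z = (1 - alpha) * z + alpha * text_centers[:, None, :]

def dist_to_text(feats):
    """Mean distance from features to their textual class center."""
    return np.linalg.norm(feats - text_centers[:, None, :], axis=-1).mean()

print(dist_to_text(visual), dist_to_text(z))  # distance shrinks after alignment
```

The point of the stepwise update is that each iteration only moves features a small fraction of the remaining gap, so the distribution shift is gradual rather than a single one-step projection; after `T` steps the residual offset scales with `(1 - alpha) ** T`.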