🤖 AI Summary
In zero-shot learning (ZSL), visual features often contain semantics-irrelevant information, leading to ambiguous vision–semantics alignment and poor generalization to unseen classes. To address this, we propose an input-level semantic-aware patch filtering and replacement mechanism: prior to Transformer encoding, semantics-irrelevant image patches are dynamically identified and replaced via self-supervised patch selection, cross-layer attention-based scoring aggregation, and word-embedding-guided learnable patch embedding. A semantically initialized, learnable substitute embedding is introduced to preserve structural integrity while enhancing semantic consistency. To our knowledge, this is the first input-level intervention specifically designed to mitigate semantic misalignment in ZSL. Our method achieves state-of-the-art performance on standard ZSL benchmarks and yields more interpretable, semantically faithful visual representations.
📝 Abstract
Zero-shot learning (ZSL) aims to recognize unseen classes without labeled training examples by leveraging class-level semantic descriptors such as attributes. A fundamental challenge in ZSL is semantic misalignment, where semantic-unrelated information in visual features introduces ambiguity into visual–semantic interaction. Unlike existing methods that suppress semantic-unrelated information post hoc, either in the feature space or the model space, we propose addressing this issue at the input stage, preventing semantic-unrelated patches from propagating through the network. To this end, we introduce Semantically contextualized VIsual Patches (SVIP) for ZSL, a transformer-based framework designed to enhance visual–semantic alignment. Specifically, we propose a self-supervised patch selection mechanism that preemptively learns to identify semantic-unrelated patches in the input space. It is trained with supervision from attention scores aggregated across all transformer layers, which estimate each patch's semantic relevance. Since removing semantic-unrelated patches from the input sequence may disrupt object structure, we instead replace them with learnable patch embeddings. Initialized from word embeddings, these embeddings remain semantically meaningful throughout feature extraction. Extensive experiments on ZSL benchmarks demonstrate that SVIP achieves state-of-the-art performance while providing more interpretable and semantically rich feature representations.
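The core input-level mechanism described above (aggregate attention scores across layers, pick the least semantically relevant patches, and swap them for a substitute embedding before encoding) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of [CLS]-to-patch attention as the per-layer score, and the fixed substitute vector are all assumptions for clarity; in SVIP the substitute embedding is learnable and word-embedding-initialized.

```python
import numpy as np

def semantic_patch_filter(patches, attn_per_layer, substitute, k):
    """Replace the k lowest-scoring patches with a substitute embedding.

    patches:        (N, D) input patch embeddings
    attn_per_layer: (L, N) attention each layer's [CLS] token pays to each
                    patch (hypothetical choice of per-layer score)
    substitute:     (D,) substitute embedding (learnable in the actual method;
                    a fixed vector here for illustration)
    k:              number of patches treated as semantic-unrelated
    """
    scores = attn_per_layer.mean(axis=0)   # aggregate scores across all layers
    irrelevant = np.argsort(scores)[:k]    # k least semantically relevant patches
    filtered = patches.copy()
    filtered[irrelevant] = substitute      # input-level replacement, keeping
    return filtered, irrelevant            # the sequence structure intact
```

Replacing rather than dropping patches keeps the token sequence (and thus the object's spatial structure) intact, which is the stated motivation for the learnable substitute embedding.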