🤖 AI Summary
Vision Transformers (ViTs) suffer from quadratic computational complexity in self-attention, hindering their efficiency and scalability for semantic segmentation. To address this, we propose a dynamic token merging framework guided by pseudo-clustering supervision: a learnable Cluster module aggregates semantically similar tokens under supervision from pseudo-clusters derived from ground-truth segmentation masks; a Regenerator module then restores fine-grained local details to preserve structural fidelity. To our knowledge, this is the first work to jointly leverage dynamic token merging and pseudo-clustering supervision for semantic segmentation. Evaluated on ADE20K, Cityscapes, and PASCAL-Context, our method achieves up to 2.18× reduction in GFLOPs and 1.64× inference speedup, with negligible accuracy degradation—demonstrating an effective trade-off between efficiency and segmentation quality.
📝 Abstract
Vision Transformers can achieve high accuracy and strong generalization across various contexts, but their quadratic attention complexity limits their practical applicability on real-world robotic systems. Recent works have focused on dynamically merging tokens according to image complexity. Token merging works well for classification but is less suited to dense prediction. We propose ClustViT, which extends the Vision Transformer (ViT) backbone to address semantic segmentation. Within our architecture, a trainable Cluster module progressively merges similar tokens along the network, guided by pseudo-clusters derived from segmentation masks. Subsequently, a Regenerator module restores fine details for downstream heads. Our approach achieves up to 2.18x fewer GFLOPs and 1.64x faster inference on three different datasets, with comparable segmentation accuracy. Our code and models will be made publicly available.
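To build intuition for the merge-then-regenerate pipeline described above, the sketch below reduces tokens sharing a cluster id by averaging, then broadcasts the merged tokens back to their original positions. This is only a rough illustration under simplifying assumptions: in the paper the Cluster module is learned and supervised by pseudo-clusters, whereas here the assignments are given; all function names are hypothetical and not the authors' code.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, assignments: np.ndarray) -> np.ndarray:
    """Collapse tokens that share a cluster id into one token per cluster
    by mean pooling (a stand-in for the learned Cluster module)."""
    n_clusters = int(assignments.max()) + 1
    merged = np.zeros((n_clusters, tokens.shape[1]), dtype=tokens.dtype)
    counts = np.zeros(n_clusters, dtype=tokens.dtype)
    for token, cluster_id in zip(tokens, assignments):
        merged[cluster_id] += token
        counts[cluster_id] += 1
    return merged / counts[:, None]

def regenerate_tokens(merged: np.ndarray, assignments: np.ndarray) -> np.ndarray:
    """Broadcast each merged token back to the positions of its members,
    restoring the original token count for a dense prediction head
    (a stand-in for the Regenerator module)."""
    return merged[assignments]

# Toy example: three 2-d tokens, the first two assigned to the same cluster.
tokens = np.array([[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]])
assignments = np.array([0, 0, 1])
merged = merge_tokens(tokens, assignments)          # shape (2, 2)
restored = regenerate_tokens(merged, assignments)   # shape (3, 2)
```

Downstream attention runs on the shorter merged sequence, which is where the GFLOP savings come from; the regeneration step only recovers the spatial layout that segmentation heads require.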