🤖 AI Summary
Vision Transformers (ViTs) suffer from the quadratic computational complexity of global self-attention, while existing spatial grouping methods ignore semantic correlations, often splitting semantically related tokens across groups. To address this, we propose Semantic Equitable Clustering (SEC): a single-pass, non-iterative clustering method that groups tokens by global semantic similarity while enforcing hard capacity constraints, ensuring both semantic coherence and load balancing across clusters. By fixing the number of tokens per cluster, SEC jointly optimizes modeling capability and hardware-friendly parallelism. We integrate SEC into a new ViT backbone, SECViT, and design a vision-language interface tailored for multimodal large language models (MLLMs). Extensive experiments show that SECViT consistently outperforms baselines on image classification, object detection, and segmentation. Moreover, when integrated with MLLMs such as LLaVA, SEC delivers significant inference acceleration while maintaining or even improving performance.
📝 Abstract
The Vision Transformer (ViT) has gained prominence for its superior relational modeling prowess. However, the quadratic complexity of its global attention mechanism poses substantial computational burdens. A common remedy spatially groups tokens for self-attention, reducing computational requirements. Nonetheless, this strategy neglects semantic information in tokens, possibly scattering semantically-linked tokens across distinct groups, thus compromising the efficacy of self-attention intended for modeling inter-token dependencies. Motivated by these insights, we introduce a fast and balanced clustering method, named **S**emantic **E**quitable **C**lustering (SEC). SEC clusters tokens based on their global semantic relevance in an efficient, straightforward manner. In contrast to traditional clustering methods requiring multiple iterations, our method achieves token clustering in a single pass. Additionally, SEC regulates the number of tokens per cluster, ensuring a balanced distribution for effective parallel processing on current computational platforms without necessitating further optimization. Capitalizing on SEC, we propose a versatile vision backbone, SECViT. Comprehensive experiments in image classification, object detection, instance segmentation, and semantic segmentation validate the effectiveness of SECViT. Moreover, SEC can be conveniently and swiftly applied to multimodal large language models (MLLMs), such as LLaVA, serving as a vision-language connector that effectively accelerates inference while maintaining unchanged or better performance.
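The core idea described above — single-pass, capacity-constrained clustering by global semantic relevance — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the choice of a mean-pooled token as the global semantic anchor and cosine similarity as the relevance score are assumptions, and the function name `semantic_equitable_clustering` is hypothetical.

```python
import numpy as np

def semantic_equitable_clustering(tokens: np.ndarray, num_clusters: int) -> np.ndarray:
    """Hypothetical sketch of single-pass balanced token clustering.

    tokens: (N, D) array of token features; N must be divisible by num_clusters.
    Returns: (num_clusters, N // num_clusters) array of token indices.
    """
    n, _ = tokens.shape
    assert n % num_clusters == 0, "equal-size clusters require N divisible by num_clusters"

    # Global semantic anchor: mean-pooled token (an assumption; the paper
    # may derive global relevance differently).
    anchor = tokens.mean(axis=0)

    # Score each token by cosine similarity to the anchor.
    scores = tokens @ anchor
    scores /= np.linalg.norm(tokens, axis=1) * np.linalg.norm(anchor) + 1e-8

    # A single sort replaces iterative assignment; slicing the sorted order
    # into consecutive chunks enforces the per-cluster capacity constraint,
    # so every cluster has exactly N // num_clusters tokens.
    order = np.argsort(-scores)
    capacity = n // num_clusters
    return order.reshape(num_clusters, capacity)
```

Because every cluster holds the same number of tokens, attention within clusters maps onto fixed-shape batched matrix multiplies, which is what makes the scheme friendly to parallel hardware without custom kernels.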