BHViT: Binarized Hybrid Vision Transformer

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the severe accuracy degradation of Vision Transformers (ViTs) under full binarization, a challenge stemming from fundamental architectural disparities between ViTs and CNNs. To enable efficient edge deployment, we propose the first fully binarized hybrid ViT architecture. Methodologically, we introduce: (i) hierarchical coarse-to-fine token aggregation to preserve structural semantics; (ii) shift-augmented binary MLPs for enhanced nonlinearity; (iii) quantization-decomposition-based binary attention to mitigate attention collapse; and (iv) an oscillation-regularized loss function tailored for Adam optimization to stabilize training. Collectively, these techniques alleviate the information loss and training instability inherent in binarization. Evaluated on ImageNet, our method achieves state-of-the-art accuracy among binary ViTs, with a 3.2× inference speedup and 5.7× higher energy efficiency, demonstrating a compelling balance among model compactness, hardware compatibility, and representational capacity.
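"Full binarization" constrains weights (and activations) to two values. As background for the summary above, here is a minimal sketch of standard sign binarization with a per-tensor scaling factor (the common XNOR-Net-style recipe; BHViT's exact binarizer is not detailed on this page and may differ):

```python
import numpy as np

def binarize(w):
    """Sign-binarize a weight tensor to {-1, +1} and rescale by its
    mean absolute value (XNOR-Net-style; illustrative background only,
    not BHViT's exact scheme)."""
    alpha = np.abs(w).mean()                    # per-tensor scaling factor
    return alpha * np.where(w >= 0, 1.0, -1.0)  # sign binarization

w = np.array([[0.4, -0.2], [-0.6, 0.8]])
print(binarize(w))  # values: [[0.5, -0.5], [-0.5, 0.5]]
```

The scaling factor lets the binary tensor approximate the original weight magnitudes while the ±1 pattern enables multiplication-free (XNOR/popcount) inference.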

📝 Abstract
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs), offering a potential solution to the deployment challenges faced by Vision Transformers (ViTs) on edge devices. However, due to the structural differences between CNN and Transformer architectures, simply applying binary CNN strategies to ViT models leads to a significant performance drop. To tackle this challenge, we propose BHViT, a binarization-friendly hybrid ViT architecture, and its fully binarized model, guided by three important observations. First, BHViT utilizes local information interaction and a coarse-to-fine hierarchical feature aggregation technique to address the redundant computation stemming from excessive tokens. Second, a novel module based on shift operations is proposed to enhance the performance of the binary Multilayer Perceptron (MLP) module without significantly increasing computational overhead. Third, an innovative attention matrix binarization method based on quantization decomposition is proposed to evaluate token importance in the binarized attention matrix. Finally, we propose a regularization loss to address the inadequate optimization caused by the incompatibility between weight oscillation in the binary layers and the Adam optimizer. Extensive experimental results demonstrate that our proposed algorithm achieves SOTA performance among binary ViT methods.
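The quantization-decomposition idea for binary attention can be pictured as approximating a softmax attention matrix with a weighted sum of binary threshold masks, so that relative token importance survives binarization. The sketch below is an illustration of that general idea; the thresholds and the least-squares fitting are assumptions, not the paper's exact formulation:

```python
import numpy as np

def binary_decompose(attn, thresholds=(0.1, 0.2, 0.4)):
    """Approximate an attention matrix by a weighted sum of binary masks.
    Illustrative only: thresholds and the scalar-weight fit are assumed,
    not BHViT's exact method."""
    masks = [(attn >= t).astype(attn.dtype) for t in thresholds]
    # fit one scalar weight per mask by least squares
    M = np.stack([m.ravel() for m in masks], axis=1)
    coef, *_ = np.linalg.lstsq(M, attn.ravel(), rcond=None)
    approx = sum(c * m for c, m in zip(coef, masks))
    return masks, approx

attn = np.array([[0.5, 0.3, 0.2],
                 [0.6, 0.3, 0.1]])
masks, approx = binary_decompose(attn)
```

Each mask is a {0, 1} matrix that hardware can apply with cheap logical operations, while the stacked thresholds retain a coarse ordering of token importance that a single hard threshold would destroy.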
Problem

Research questions and friction points this paper is trying to address.

ViTs are difficult to deploy on edge devices that require real-time, energy-efficient computation.
Directly applying binary CNN strategies to ViTs causes a significant performance drop due to structural differences between the two architectures.
Binarization-friendly ViT architectures and stable optimization of binary layers remain open problems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local information interaction and hierarchical feature aggregation
Shift-based module enhancing binary MLP performance
Quantization decomposition for attention matrix binarization
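The shift-based MLP enhancement can be pictured with a generic four-direction spatial shift, a common multiplication-free way to mix information across tokens. The module below is a hypothetical illustration of that family of operations, not BHViT's exact design:

```python
import numpy as np

def spatial_shift(x):
    """Four-direction spatial shift (hypothetical illustration, not the
    paper's exact module): split channels into four groups and shift each
    group one pixel in a different direction, zero-padding the border.
    x: (H, W, C) feature map with C divisible by 4."""
    h, w, c = x.shape
    g = c // 4
    out = np.zeros_like(x)
    out[1:, :, :g]      = x[:-1, :, :g]       # group 0: shift down
    out[:-1, :, g:2*g]  = x[1:, :, g:2*g]     # group 1: shift up
    out[:, 1:, 2*g:3*g] = x[:, :-1, 2*g:3*g]  # group 2: shift right
    out[:, :-1, 3*g:]   = x[:, 1:, 3*g:]      # group 3: shift left
    return out

x = np.arange(2 * 2 * 4, dtype=float).reshape(2, 2, 4)
y = spatial_shift(x)
```

Because a shift only re-indexes memory, it adds cross-token interaction to a binary MLP at essentially zero multiply-accumulate cost.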
👥 Authors
Tian Gao
University of Macau; Nanjing University of Science and Technology
Zhiyuan Zhang
Singapore Management University
Yu Zhang
Shanghai Jiao Tong University
Huajun Liu
Nanjing University of Science and Technology
machine learning, computer vision, information fusion, radar sensors
Kaijie Yin
University of Macau
binary neural networks, computer vision
Chengzhong Xu
University of Macau
Hui Kong
University of Macau