UniFormer: Unified and Efficient Transformer for Reasoning Across General and Custom Computing

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks face a three-way trade-off among efficiency, implementation complexity, and accuracy when deployed across heterogeneous computing platforms—particularly between general-purpose hardware (e.g., GPUs) and domain-specific accelerators (e.g., FPGAs)—due to fundamental differences in parallelism models. To address this, we propose UniFormer, the first Transformer architecture explicitly designed for cross-platform unified optimization. UniFormer achieves hardware-aware co-adaptation at the model level via highly parallel structural design and tight compute–memory integration. Crucially, it departs from the conventional “design-then-adapt” paradigm by enabling joint architectural and platform-level optimization. Experimental results demonstrate that UniFormer attains state-of-the-art accuracy with low latency on GPUs, while simultaneously achieving significantly higher resource utilization and inference throughput on FPGAs. Overall, it substantially improves cross-platform deployment efficiency and practical applicability.

📝 Abstract
The success of neural networks such as convolutional neural networks (CNNs) has been largely attributed to their effective and widespread deployment on customised computing platforms, including field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). In the current era, Transformer-based architectures underpin the majority of state-of-the-art (SOTA) large models, which are also increasingly deployed on customised computing hardware for low-power and real-time applications. However, the fundamentally different parallel computation paradigms of general-purpose and customised computing often force compromises in model transfer and deployability, typically at the cost of complexity, efficiency, or accuracy. Moreover, many cross-platform optimisation principles remain underexplored in existing studies. This paper introduces UniFormer, a unified and efficient Transformer architecture for both general-purpose and customised computing platforms. By enabling higher parallelism and compute-storage fusion, UniFormer achieves SOTA accuracy and latency on GPUs while exhibiting strong adaptability on FPGAs. To the best of our knowledge, this is the first efficient Transformer work that jointly considers both general-purpose and customised computing architectures.
Problem

Research questions and friction points this paper is trying to address.

Addressing model transfer compromises between general and custom computing
Overcoming efficiency and accuracy trade-offs in cross-platform deployment
Exploring underexplored optimization principles for Transformer architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Transformer for general and custom computing
Enables higher parallelism and compute-storage fusion
Achieves SOTA accuracy and latency across platforms
Zhuoheng Ran
Department of Electrical Engineering, City University of Hong Kong
Chong Wu
Department of Electrical Engineering, City University of Hong Kong
Renjie Xu
CityU-Oxford Joint Centre for Intelligent Multidimensional Data Analysis
Maolin Che
School of Mathematics and Statistics, Guizhou University
Hong Yan
Department of Electrical Engineering, City University of Hong Kong