🤖 AI Summary
Neural networks face a three-way trade-off among efficiency, implementation complexity, and accuracy when deployed across heterogeneous computing platforms—particularly between general-purpose hardware (e.g., GPUs) and domain-specific accelerators (e.g., FPGAs)—owing to fundamental differences in their parallelism models. To address this, we propose UniFormer, the first Transformer architecture explicitly designed for unified cross-platform optimization. UniFormer achieves hardware-aware co-adaptation at the model level through a highly parallel structural design and tight compute–memory integration. Crucially, it departs from the conventional "design-then-adapt" paradigm by enabling joint optimization of the architecture and the target platform. Experimental results demonstrate that UniFormer attains state-of-the-art accuracy with low latency on GPUs, while simultaneously achieving significantly higher resource utilization and inference throughput on FPGAs. Overall, it substantially improves cross-platform deployment efficiency and practical applicability.
📝 Abstract
The success of neural networks such as convolutional neural networks (CNNs) has been largely attributed to their effective and widespread deployment on customised computing platforms, including field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). In the current era, Transformer-based architectures underpin the majority of state-of-the-art (SOTA) large models, which are also increasingly deployed on customised computing hardware for low-power and real-time applications. However, the fundamentally different parallel computation paradigms of general-purpose and customised computing often force compromises in model transfer and deployability, which typically come at the cost of complexity, efficiency or accuracy. Moreover, many cross-platform optimisation principles remain underexplored in existing studies. This paper introduces UniFormer, a unified and efficient Transformer architecture for both general-purpose and customised computing platforms. By enabling higher parallelism and compute-storage fusion, UniFormer achieves SOTA accuracy and latency on GPUs while exhibiting strong adaptability on FPGAs. To the best of our knowledge, this is the first efficient Transformer work that jointly considers both general-purpose and customised computing architectures.