AdaPerceiver: Transformers with Adaptive Width, Depth, and Tokens

📅 2025-11-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing Transformer inference lacks cross-dimensional computational adaptability, making it difficult to satisfy diverse hardware and latency constraints at once. This paper proposes AdaPerceiver, the first unified architecture enabling joint adaptation across depth, width, and token count, achieved via a scalable network design and efficient co-training that let a single model dynamically select its computation path. Its key contribution is overcoming the conventional limitation of dynamic computation to a single dimension, significantly expanding the accuracy-throughput trade-off space. Experiments show that AdaPerceiver achieves 85.4% top-1 accuracy on image classification with 36% higher throughput than FlexiViT-L; matches ViT-H/14 on dense prediction tasks with ~26× fewer encoder FLOPs; and maintains ImageNet-1K accuracy (within ±0.1 percentage points) while cutting FLOPs by 24–33%.

📝 Abstract
Modern transformer architectures achieve remarkable performance across tasks and domains but remain rigid in how they allocate computation at inference time. Real-world deployment often requires models to adapt to diverse hardware and latency constraints, yet most approaches to dynamic computation focus on a single axis -- such as reducing the number of tokens. We present AdaPerceiver, the first transformer architecture with unified adaptivity across depth, width, and tokens within a single model. We propose an architecture that supports adaptivity along these axes. We couple this with an efficient joint training regime that ensures the model maintains performance across its various configurations. We evaluate AdaPerceiver on image classification, semantic segmentation, and depth estimation tasks. On image classification, AdaPerceiver expands the accuracy-throughput Pareto front. It achieves 85.4% accuracy while yielding 36% higher throughput than FlexiViT-L. On dense prediction, AdaPerceiver matches ViT-H/14 while having ~26× fewer encoder FLOPs (floating-point operations) on semantic segmentation and depth estimation. Finally, we show how AdaPerceiver equipped with a policy can maintain ImageNet-1K accuracy (±0.1 percentage points) while reducing FLOPs by 24–33%.
Problem

Research questions and friction points this paper is trying to address.

Transformers lack adaptive computation for diverse hardware constraints
Dynamic computation approaches focus on only a single adaptation axis
Need for unified adaptivity across depth, width, and tokens in a single model
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single transformer adapts its depth, width, and token count
Efficient joint training maintains performance across configurations
An inference-time policy reduces FLOPs while preserving accuracy
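The policy idea above can be sketched as budget-aware configuration selection: given several (depth, width, tokens) paths through one model, pick the most accurate path that fits a compute budget. This is a minimal illustrative sketch; the configuration values, costs, and accuracies below are hypothetical, not numbers from the paper.

```python
# Hypothetical sketch of AdaPerceiver-style configuration selection.
# All numbers (GFLOPs, accuracies) are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    depth: int       # number of transformer blocks to run
    width: int       # hidden dimension
    tokens: int      # number of tokens processed
    gflops: float    # estimated cost of this computation path
    accuracy: float  # measured validation accuracy of this path

CONFIGS = [
    Config(depth=12, width=768, tokens=196, gflops=17.6, accuracy=0.834),
    Config(depth=12, width=768, tokens=128, gflops=11.8, accuracy=0.829),
    Config(depth=8,  width=768, tokens=128, gflops=8.1,  accuracy=0.818),
    Config(depth=8,  width=512, tokens=96,  gflops=4.3,  accuracy=0.801),
]

def select_config(budget_gflops: float) -> Config:
    """Return the most accurate configuration that fits the budget,
    falling back to the cheapest path when none fits."""
    feasible = [c for c in CONFIGS if c.gflops <= budget_gflops]
    if not feasible:
        return min(CONFIGS, key=lambda c: c.gflops)
    return max(feasible, key=lambda c: c.accuracy)

print(select_config(10.0))  # picks the depth-8, width-768, 128-token path
```

Because all paths live in one jointly trained model, switching configurations at inference costs nothing beyond choosing which blocks, channels, and tokens to run.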
Purvish Jajal
Purdue University
Deep Learning
Nick John Eliopoulos
Purdue University, West Lafayette, IN, USA
Benjamin Shiue-Hal Chou
PhD student, Purdue University
Music and Artificial Intelligence, Computer Vision
George K. Thiruvathukal
Loyola University Chicago, Chicago, IL, USA
Yung-Hsiang Lu
Purdue University, West Lafayette, IN, USA
James C. Davis
Purdue University, West Lafayette, IN, USA