PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the hardware efficiency bottleneck caused by the quadratic computational and memory overhead of self-attention, this paper proposes PADE, a dynamic sparse attention acceleration framework that achieves algorithm–hardware co-optimization without a dedicated sparsity predictor. Built on a bit-serial enabled stage-fusion (BSF) mechanism, the approach introduces three key innovations: (1) bit-wise uncertainty-interval guard filtering (BUI-GF), which prunes low-correlation token pairs with zero prediction overhead; (2) bidirectional sparsity-based out-of-order execution (BS-OOE), which improves hardware utilization; and (3) interleaving-based sparsity-tiled attention (ISTA), which enhances data reuse. Combined with a custom sparse accelerator architecture, these techniques achieve a 7.43× speedup and 31.1× higher energy efficiency than an NVIDIA H100 GPU across 22 benchmarks, and reduce energy consumption by 5.1×, 4.3×, and 3.4× versus Sanger, DOTA, and SOFA, respectively.

📝 Abstract
Attention-based models have revolutionized AI, but the quadratic cost of self-attention incurs severe computational and memory overhead. Sparse attention methods alleviate this by skipping low-relevance token pairs. However, current approaches lack practicality due to the heavy cost of an added sparsity predictor, which severely degrades their hardware efficiency. This paper advances the state-of-the-art (SOTA) by proposing a bit-serial enabled stage-fusion (BSF) mechanism, which eliminates the need for a separate predictor. However, it faces key challenges: 1) inaccurate bit-sliced sparsity speculation leads to incorrect pruning; 2) fine-grained and imbalanced bit-level workloads cause hardware under-utilization; 3) the row-wise dependency in the sparsity pruning criteria makes tiling difficult. We propose PADE, a predictor-free algorithm–hardware co-design for dynamic sparse attention acceleration. PADE features three key innovations: 1) a bit-wise uncertainty interval-enabled guard filtering (BUI-GF) strategy to accurately identify trivial tokens during each bit round; 2) bidirectional sparsity-based out-of-order execution (BS-OOE) to improve hardware utilization; 3) interleaving-based sparsity-tiled attention (ISTA) to reduce both I/O and computational complexity. These techniques, combined with custom accelerator designs, enable practical sparsity acceleration without relying on an added sparsity predictor. Extensive experiments on 22 benchmarks show that PADE achieves a 7.43× speedup and 31.1× higher energy efficiency than an NVIDIA H100 GPU. Compared to SOTA accelerators, PADE achieves 5.1×, 4.3×, and 3.4× energy savings over Sanger, DOTA, and SOFA, respectively.
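The core idea behind predictor-free guard filtering, as the abstract describes it, is that accumulating Q·K^T one key bit-plane at a time (MSB first) yields a shrinking uncertainty interval on each final score, so hopeless keys can be pruned mid-computation. The sketch below illustrates that principle in NumPy under simplifying assumptions not taken from the paper: unsigned quantized keys, nonnegative query values, and a fixed score threshold (PADE's actual BUI-GF criterion and numerics are not specified here).

```python
import numpy as np

def bit_serial_prune(q, K, threshold, bits=8):
    """Illustrative bit-serial guard filtering (hypothetical, not PADE's
    exact algorithm): accumulate q . K^T one key bit-plane at a time,
    MSB first, and prune keys whose score upper bound drops below the
    threshold. Assumes q >= 0 and K holds unsigned `bits`-bit integers."""
    n_keys = K.shape[0]
    partial = np.zeros(n_keys)              # exact contribution of processed bits
    alive = np.ones(n_keys, dtype=bool)
    q_sum = q.sum()                         # max per-bit-plane dot product (q >= 0)
    for b in range(bits - 1, -1, -1):
        plane = (K >> b) & 1                # current bit-plane of every key
        partial[alive] += (plane[alive] @ q) * (1 << b)
        remaining_max = q_sum * ((1 << b) - 1)   # if all lower bits were ones
        upper = partial + remaining_max          # uncertainty-interval upper bound
        alive &= upper >= threshold              # guard filter: drop hopeless keys
    return alive, partial
```

Because pruning only fires when the upper bound is already below the threshold, the filter is conservative: no key whose exact score meets the threshold is ever discarded, which is the property that lets pruning run inside the score computation instead of in a separate predictor.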
Problem

Research questions and friction points this paper is trying to address.

Eliminates need for separate sparsity predictor in attention models
Improves hardware utilization via out-of-order execution for sparse workloads
Reduces computational and I/O complexity with interleaving-based tiling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bit-wise uncertainty interval guard filtering for accurate token pruning
Bidirectional sparsity-based out-of-order execution to boost hardware utilization
Interleaving-based sparsity-tiled attention to reduce I/O and compute complexity
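The third bullet's I/O saving comes from streaming attention over key/value tiles and skipping tiles whose tokens were all pruned, merging each surviving tile with an online softmax so only one tile of K/V is resident at a time. The sketch below shows that mechanism in NumPy; it is a simplified stand-in for ISTA (a single global keep mask rather than PADE's row-wise dynamic sparsity, and no interleaved scheduling), named and structured by this editor for illustration.

```python
import numpy as np

def block_sparse_attention(Q, K, V, keep, tile=4):
    """Hypothetical tile-streamed sparse attention (not PADE's exact ISTA):
    iterate over K/V tiles, skip fully pruned tiles, and fold each kept
    tile into the result with an online (running-max) softmax."""
    n, d = Q.shape
    out = np.zeros((n, d))
    m = np.full(n, -np.inf)       # running row-wise max of logits
    s = np.zeros(n)               # running softmax denominator
    for t0 in range(0, K.shape[0], tile):
        sel = np.flatnonzero(keep[t0:t0 + tile]) + t0
        if sel.size == 0:
            continue              # whole tile pruned: no I/O, no compute
        logits = Q @ K[sel].T / np.sqrt(d)
        m_new = np.maximum(m, logits.max(axis=1))
        scale = np.exp(m - m_new)                 # rescale previous partials
        p = np.exp(logits - m_new[:, None])
        out = out * scale[:, None] + p @ V[sel]
        s = s * scale + p.sum(axis=1)
        m = m_new
    return out / s[:, None]
```

The streaming-softmax merge is what removes the row-wise dependency that the abstract flags as a tiling obstacle: each tile's contribution is rescaled as better row maxima arrive, so tiles can be processed in any order.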
Huizheng Wang
Tsinghua University
Sparse Attention · LLM Accelerator · AI Infra · Distributed Parallelism · VLSI
Hongbin Wang
School of Integrated Circuits, BNRist, Tsinghua University, Beijing, China, 100084
Zichuan Wang
School of Integrated Circuits, BNRist, Tsinghua University, Beijing, China, 100084
Zhiheng Yue
School of Integrated Circuits, BNRist, Tsinghua University, Beijing, China, 100084
Yang Wang
School of Integrated Circuits, BNRist, Tsinghua University, Beijing, China, 100084
Chao Li
School of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China, 200240
Yang Hu
School of Integrated Circuits, BNRist, Tsinghua University, Beijing, China, 100084
Shouyi Yin
Tsinghua University