Huizheng Wang
Google Scholar ID: hiubZEUAAAAJ
Tsinghua University
Research Interests: Sparse Attention, LLM Accelerators, AI Infrastructure, Distributed Parallelism, VLSI
Citations & Impact (all-time)
Citations: 162
H-index: 8
i10-index: 8
Publications: 20
Co-authors: 2
Contact
No contact links provided.
Publications (7 listed)
Designing Spatial Architectures for Sparse Attention: STAR Accelerator via Cross-Stage Tiling (2025) · Cited: 0
PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion (2025) · Cited: 0
TEMP: A Memory Efficient Physical-aware Tensor Partition-Mapping Framework on Wafer-scale Chips (2025) · Cited: 0
BitStopper: An Efficient Transformer Attention Accelerator via Stage-fusion and Early Termination (2025) · Cited: 0
LAPA: Log-Domain Prediction-Driven Dynamic Sparsity Accelerator for Transformer Model (2025) · Cited: 0
MoEntwine: Unleashing the Potential of Wafer-scale Chips for Large-scale Expert Parallel Inference (2025) · Cited: 0
MCBP: A Memory-Compute Efficient LLM Inference Accelerator Leveraging Bit-Slice-enabled Sparsity and Repetitiveness (2025) · Cited: 0
Resume (English only)
Co-authors (2 total)
Yang HU
Associate Professor, Tsinghua University
Xiaohu You
Professor of Information and Communication, Southeast University