🤖 AI Summary
Global self-attention incurs quadratic computational and memory overhead on high-resolution images; while local attention reduces complexity, conventional window partitioning yields token sequences that are discontinuous in memory, hindering efficient sparse acceleration. This work introduces the Hilbert curve to vision Transformers for the first time, leveraging its space-filling property to reorder image tokens so that spatially adjacent tokens become contiguous in the sequence—thereby significantly increasing block sparsity and improving the hardware efficiency of local attention. Based on this ordering, we propose two novel attention mechanisms: Hilbert Window Attention and Hilbert Slide Attention, both fully compatible with existing sparse kernel libraries. Experiments demonstrate that these mechanisms achieve approximately 4× and 18× speedups over standard local attention, respectively, enabling substantially faster end-to-end inference with negligible accuracy degradation.
📝 Abstract
The quadratic compute and memory costs of global self-attention severely limit its use in high-resolution images. Local attention reduces complexity by restricting attention to neighborhoods. Block-sparse kernels can further improve the efficiency of local attention, but conventional local attention patterns often fail to deliver significant speedups because tokens within a window are not contiguous in the 1D sequence. This work proposes a novel method for constructing windows and neighborhoods based on the Hilbert curve. Image tokens are first reordered along a Hilbert curve, and windows and neighborhoods are then formed on the reordered 1D sequence. From a block-sparse perspective, this strategy significantly increases block sparsity and can be combined with existing block-sparse kernels to improve the efficiency of 2D local attention. Experiments show that the proposed Hilbert Window Attention and Hilbert Slide Attention can accelerate window attention and slide attention by about $4\times$ and $18\times$, respectively. To assess practicality, the strategy is instantiated as the Hilbert Window Transformer and the Hilbert Neighborhood Transformer, both of which achieve end-to-end speedups with minimal accuracy loss. Overall, combining Hilbert-guided local attention with block-sparse kernels offers a general and practical approach to enhancing the efficiency of 2D local attention for images. The code is available at https://github.com/Yunge6666/Hilbert-Local-Attention.
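The core reordering idea can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: `d2xy` is the classic iterative Hilbert index-to-coordinate conversion, and `hilbert_order` and the window split are hypothetical helper names. After reordering, each contiguous run of tokens covers a compact 2D region, so window attention on the reordered sequence becomes a block-diagonal (block-sparse) mask.

```python
def d2xy(n, d):
    """Map Hilbert-curve index d to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_order(n):
    """Row-major token indices listed in Hilbert-curve visiting order."""
    return [x * n + y for x, y in (d2xy(n, d) for d in range(n * n))]

# Example: an 8x8 token grid. Gathering tokens with this permutation and
# slicing the result into windows of 16 gives four windows, each of which
# lies inside a single 4x4 spatial quadrant.
n = 8
perm = hilbert_order(n)
windows = [perm[i:i + 16] for i in range(0, n * n, 16)]
```

Because consecutive points on a Hilbert curve are always grid neighbors, tokens that sit together in a window on the reordered sequence are also close in the image, which is exactly what lets the 2D locality pattern map onto dense blocks in the attention mask.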