🤖 AI Summary
Traditional position encodings in vision-language models (VLMs), such as raster-scan indexing combined with RoPE, struggle to simultaneously support long-range dependency modeling and multi-granularity perception. To address this, we propose Pyramid-descent Visual Position Encoding (PyPE), a hierarchical scheme that assigns position indexes from the periphery inward and progressively expands the central receptive field, thereby decoupling spatial relationships among visual tokens from fixed anchor dependencies and enabling joint modeling of local details and global structure. PyPE integrates three key innovations: (1) a modified rotary position embedding for enhanced angular sensitivity; (2) a multi-scale receptive field design; and (3) attention weight recalibration that emphasizes geometrically consistent token interactions. Extensive experiments across VLMs of varying scales demonstrate consistent and significant improvements on diverse vision-language tasks, including visual question answering (VQA), image captioning, and referring expression comprehension. The implementation is publicly available.
📝 Abstract
Vision-language Models (VLMs) have shown remarkable capabilities in advancing general artificial intelligence, yet the irrational encoding of visual positions continues to inhibit the models' comprehensive perception across different levels of granularity. In this work, we propose Pyramid-descent Visual Position Encoding (PyPE), a novel approach designed to enhance the perception of visual tokens within VLMs. By assigning visual position indexes from the periphery to the center and incrementally expanding the central receptive field, PyPE addresses the limitations of traditional raster-scan methods and mitigates the long-term decay effects induced by Rotary Position Embedding (RoPE). Our method reduces the relative distance between interrelated visual elements and instruction tokens, promoting a more rational allocation of attention weights, enabling multi-granularity perception of visual elements, and countering over-reliance on anchor tokens. Extensive experimental evaluations demonstrate that PyPE consistently improves the general capabilities of VLMs across various sizes. Code is available at https://github.com/SakuraTroyChen/PyPE.
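The periphery-to-center index assignment can be illustrated with a toy sketch. The ring-depth rule below is an assumption made for illustration, not the paper's exact formulation; consult the linked repository for the official implementation.

```python
def pyramid_descent_indices(n: int) -> list[list[int]]:
    """Toy sketch (not the official PyPE code): assign each token in an
    n x n visual grid its ring depth, 0 on the outer border and growing
    toward the center. Using these depths as position indexes orders
    tokens from the periphery inward, so central tokens get the largest
    indexes and thus the smallest relative distance to the instruction
    tokens appended after the image sequence."""
    return [
        [min(i, j, n - 1 - i, n - 1 - j) for j in range(n)]
        for i in range(n)
    ]

for row in pyramid_descent_indices(4):
    print(row)
# Border tokens share index 0; the central 2x2 block shares index 1.
```

Under raster-scan indexing, a token's distance to the instruction depends on its row; under this inward ordering, it depends on how central the token is, which is what reduces RoPE's long-term decay for semantically salient central content.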