SJD-PV: Speculative Jacobi Decoding with Phrase Verification for Autoregressive Image Generation

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency of autoregressive image generation models, which stems from sequential token-by-token decoding. Existing training-free acceleration methods verify tokens independently, neglecting their strong co-occurrence patterns and thereby causing contextual inconsistencies that limit efficiency. To overcome this, we propose the first training-free, phrase-level speculative verification framework. By mining co-occurrence statistics of visual tokens from the training corpus, our method constructs semantically coherent visual phrases and jointly verifies multiple tokens within each decoding window. Acceptance of an entire phrase is determined by an aggregated likelihood ratio, departing from the conventional assumption of independent single-token verification. This approach effectively captures local co-occurrence regularities, reducing the number of function evaluations and accelerating text-to-image decoding by up to 30% while maintaining generation quality.

📝 Abstract
Autoregressive (AR) image models have recently demonstrated remarkable generative capability, but their sequential nature results in significant inference latency. Existing training-free acceleration methods typically verify tokens independently, overlooking the strong co-occurrence patterns between adjacent visual tokens. This independence assumption often leads to contextual inconsistency and limits decoding efficiency. In this work, we introduce a novel training-free acceleration framework that performs phrase-level speculative verification, enabling the model to jointly validate multiple correlated tokens within each decoding window. To construct such phrase units, we analyze token co-occurrence statistics from the training corpus and group frequently co-occurring tokens into semantically coherent visual phrases. During inference, the proposed phrase-level verification evaluates aggregated likelihood ratios over each phrase, allowing simultaneous acceptance of multiple tokens while preserving generation quality. Extensive experiments on autoregressive text-to-image generation show that our method significantly reduces the number of function evaluations (NFE) and achieves up to 30% faster decoding without compromising visual fidelity. Our findings reveal that modeling short-range token co-occurrence provides an effective and general principle for accelerating autoregressive inference.
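The abstract's core idea, jointly accepting a drafted phrase via an aggregated likelihood ratio instead of per-token tests, can be sketched as follows. This is a minimal illustration, not the paper's exact criterion: the aggregation below (a product of per-token target/draft probability ratios, capped at 1) and the function name `phrase_accept` are assumptions, since the paper does not spell out its acceptance rule here.

```python
import numpy as np

def phrase_accept(p_target, p_draft, rng):
    """Jointly accept or reject a drafted phrase of visual tokens.

    p_target, p_draft: per-token probabilities that the target and draft
    models assign to the drafted phrase tokens (hypothetical inputs).
    Instead of the usual independent per-token tests
    min(1, p_target[i] / p_draft[i]), the whole phrase is accepted with
    probability min(1, prod_i p_target[i] / p_draft[i]) -- one
    aggregated likelihood ratio over the phrase.
    """
    ratio = float(np.prod(np.asarray(p_target) / np.asarray(p_draft)))
    return rng.random() < min(1.0, ratio)

rng = np.random.default_rng(0)
# Draft slightly overconfident on the last token; the joint ratio
# stays near 1, so the phrase is still likely to be accepted as a unit.
accepted = phrase_accept([0.30, 0.25, 0.40], [0.28, 0.26, 0.45], rng)
```

One consequence of aggregating is that a single mildly mismatched token no longer forces truncation of the accepted prefix, which is how phrase-level verification can accept more tokens per forward pass than independent verification.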
Problem

Research questions and friction points this paper is trying to address.

autoregressive image generation
inference latency
token co-occurrence
decoding efficiency
contextual inconsistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

speculative decoding
phrase-level verification
autoregressive image generation
token co-occurrence
inference acceleration
Zhehao Yu
Harbin Institute of Technology, Shenzhen
Baoquan Zhang
Harbin Institute of Technology, Shenzhen
knowledge-guided machine learning, meta-learning, few-shot learning
Bingqi Shan
Harbin Institute of Technology, Shenzhen
Xinhao Liu
Harbin Institute of Technology, Shenzhen
Dongliang Zhou
Harbin Institute of Technology, Shenzhen
Guotao Liang
Harbin Institute of Technology, Shenzhen
Guangming Ye
SIFAR
Yunming Ye
Harbin Institute of Technology, Shenzhen, China
Mining Multimodal Data