🤖 AI Summary
This work addresses the low inference throughput of autoregressive vision-language models on edge devices, which hinders their deployment in real-time physical AI applications such as robotics and autonomous driving. The authors propose a direct conversion strategy that efficiently transforms pretrained autoregressive models into a parallel decoding architecture supporting block-wise diffusion, preserving multimodal capabilities while enabling fast generation. By integrating a suite of multimodal diffusion adaptations (causal context attention, block size annealing, auto-truncation masking, and vision-efficient concatenation) and combining them with FP8 quantization and SGLang integration, they achieve the first KV-cache-compatible vision-language block diffusion framework. The method matches the generation quality of autoregressive baselines across 11 benchmarks while delivering over a 6x end-to-end inference speedup.
📝 Abstract
Vision-language models (VLMs) predominantly rely on autoregressive decoding, which generates tokens one at a time and fundamentally limits inference throughput. This limitation is especially acute in physical AI scenarios such as robotics and autonomous driving, where VLMs are deployed on edge devices at batch size one, making AR decoding memory-bandwidth-bound and leaving hardware parallelism underutilized. While block-wise discrete diffusion has shown promise for parallel text generation, extending it to VLMs remains challenging due to the need to jointly handle continuous visual representations and discrete text tokens while preserving pretrained multimodal capabilities. We present Fast-dVLM, a block-diffusion-based VLM that enables KV-cache-compatible parallel decoding and speculative block decoding for inference acceleration. We systematically compare two AR-to-diffusion conversion strategies: a two-stage approach that first adapts the LLM backbone with text-only diffusion fine-tuning before multimodal training, and a direct approach that converts the full AR VLM in one stage. Under comparable training budgets, direct conversion proves substantially more efficient by leveraging the already multimodally aligned VLM; we therefore adopt it as our recommended recipe. We introduce a suite of multimodal diffusion adaptations (block size annealing, causal context attention, auto-truncation masking, and vision-efficient concatenation) that collectively enable effective block diffusion in the VLM setting. Extensive experiments across 11 multimodal benchmarks show that Fast-dVLM matches its autoregressive counterpart in generation quality. With SGLang integration and FP8 quantization, Fast-dVLM achieves over a 6x end-to-end inference speedup over the AR baseline.
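To make the decoding scheme concrete, the following is a minimal, purely illustrative sketch of block-wise diffusion decoding as described above: tokens are produced left-to-right one block at a time (so the key/value states of completed blocks can be cached, as in AR decoding), while positions *within* a block start fully masked and are unmasked in parallel over a few refinement steps. All names (`denoise_block`, `block_diffusion_decode`, the placeholder "predictions") are hypothetical and stand in for a real model's confidence-based unmasking; this is not the paper's implementation.

```python
MASK = -1  # sentinel for a still-masked position

def denoise_block(prefix, block):
    # Toy stand-in for one diffusion refinement step: unmask roughly
    # half of the remaining masked positions. A real denoiser would
    # predict token distributions conditioned on the cached prefix KV
    # and commit only high-confidence positions each step.
    masked = [i for i, t in enumerate(block) if t == MASK]
    for i in masked[: max(1, len(masked) // 2)]:
        block[i] = len(prefix) + i  # placeholder "prediction"
    return block

def block_diffusion_decode(prompt, num_blocks=2, block_size=4):
    """Generate num_blocks blocks of block_size tokens each.

    Blocks are emitted strictly left-to-right, so a finished block's
    KV states never change and can be cached; only positions inside
    the current block are refined in parallel."""
    out = list(prompt)
    for _ in range(num_blocks):
        block = [MASK] * block_size          # current block starts fully masked
        while MASK in block:                 # iterative parallel refinement
            block = denoise_block(out, block)
        out.extend(block)                    # block finalized -> KV cacheable
    return out

print(block_diffusion_decode([101, 102]))
```

The key contrast with AR decoding is that the inner loop fills several positions per model call instead of one, which is what converts the memory-bandwidth-bound batch-size-one workload into one that exploits hardware parallelism.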