Fourier-VLM: Compressing Vision Tokens in the Frequency Domain for Large Vision-Language Models

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive visual token count in vision-language models (VLMs)—which leads to long context lengths, high computational overhead, and significant inference latency—this paper proposes a parameter-free, low-overhead frequency-domain compression method. Specifically, it introduces two-dimensional discrete cosine transform (2D-DCT) into visual token compression for the first time, leveraging the energy concentration of visual features in low-frequency components; low-pass filtering retains only essential low-frequency coefficients, drastically shortening the visual sequence length. Unlike existing approaches relying on learnable queries or importance sampling, our method requires no additional parameters or training and incurs negligible computational cost. Experiments on LLaVA and Qwen-VL demonstrate competitive performance against state-of-the-art methods, with an 83.8% reduction in inference FLOPs and a 31.2% increase in generation speed—achieving an optimal balance among efficiency, accuracy, and generalization.
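As a rough illustrative sketch (not the authors' code), the compression pipeline described above can be mimicked with SciPy's `dctn`/`idctn`: apply a 2D-DCT over the spatial grid of vision tokens, keep only the top-left (low-frequency) block of coefficients, and invert on the smaller grid to obtain a shorter token sequence. The grid size, feature dimension, and kept-block size below are assumptions for demonstration.

```python
import numpy as np
from scipy.fft import dctn, idctn  # computed via FFT internally


def dct_lowpass_compress(tokens, grid, keep):
    """Compress a (grid*grid, dim) vision-token sequence to (keep*keep, dim)
    by low-pass filtering in the 2D-DCT domain (illustrative sketch)."""
    dim = tokens.shape[1]
    # Reshape the flat token sequence back onto its spatial grid.
    x = tokens.reshape(grid, grid, dim)
    # Type-II 2D-DCT over the two spatial axes.
    coeffs = dctn(x, axes=(0, 1), norm="ortho")
    # Low-pass filter: retain only the top-left keep x keep coefficient
    # block, where the energy of visual features is concentrated.
    low = coeffs[:keep, :keep, :]
    # Inverse DCT on the smaller grid yields the shortened token sequence.
    compressed = idctn(low, axes=(0, 1), norm="ortho")
    return compressed.reshape(keep * keep, dim)


# Example: 24x24 = 576 tokens (as in LLaVA-v1.5) down to 10x10 = 100 tokens.
tokens = np.random.randn(576, 64).astype(np.float32)
out = dct_lowpass_compress(tokens, grid=24, keep=10)
print(out.shape)  # (100, 64)
```

Because the transform and its truncated inverse are fixed linear maps, this step adds no learnable parameters, matching the paper's training-free claim.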

📝 Abstract
Vision-Language Models (VLMs) typically replace the predefined image placeholder token (<image>) in textual instructions with visual features from an image encoder, forming the input to a backbone Large Language Model (LLM). However, the large number of vision tokens significantly increases the context length, leading to high computational overhead and inference latency. While previous efforts mitigate this by selecting only important visual features or leveraging learnable queries to reduce token count, they often compromise performance or introduce substantial extra costs. In response, we propose Fourier-VLM, a simple yet efficient method that compresses visual representations in the frequency domain. Our approach is motivated by the observation that vision features output from the vision encoder exhibit concentrated energy in low-frequency components. Leveraging this, we apply a low-pass filter to the vision features using a two-dimensional Discrete Cosine Transform (DCT). Notably, the DCT is efficiently computed via the Fast Fourier Transform (FFT) operator with a time complexity of $\mathcal{O}(n \log n)$, minimizing the extra computational cost while introducing no additional parameters. Extensive experiments across various image-based benchmarks demonstrate that Fourier-VLM achieves competitive performance with strong generalizability across both LLaVA and Qwen-VL architectures. Crucially, it reduces inference FLOPs by up to 83.8% and boosts generation speed by 31.2% compared to LLaVA-v1.5, highlighting its superior efficiency and practicality.
Problem

Research questions and friction points this paper is trying to address.

Reduces high computational overhead from vision tokens
Compresses visual features in frequency domain efficiently
Maintains performance while improving inference speed significantly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses vision tokens in frequency domain
Uses low-pass filter with DCT and FFT
Reduces FLOPs by 83.8%, speeds generation 31.2%
Huanyu Wang
Shanghai Jiao Tong University, Shanghai, China
Jushi Kai
Shanghai Jiao Tong University
Haoli Bai
Huawei Technologies
Lu Hou
Noah’s Ark Lab, Huawei Technologies Ltd., Shanghai, China
Bo Jiang
Shanghai Jiao Tong University, Shanghai, China
Ziwei He
Shanghai Jiao Tong University
Zhouhan Lin
Shanghai Jiao Tong University, Shanghai, China; Shanghai Innovation Institute, Shanghai, China