CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) suffer from visual fragmentation and insufficient perceptual resolution in fine-grained understanding tasks such as scene text recognition and document analysis. To address this, we propose a learnable dynamic local focusing mechanism that employs reinforcement learning–driven unsupervised cropping to automatically localize and faithfully extract discriminative image regions—without requiring bounding-box annotations or task-specific model fine-tuning. The mechanism is plug-and-play, compatible with diverse open- and closed-source VLMs, and supports multi-scale feature fusion and high-resolution local modeling. Experiments demonstrate substantial performance gains across cross-domain fine-grained tasks, particularly strong generalization to unseen datasets, and effective mitigation of catastrophic forgetting. Our approach establishes a lightweight, general-purpose, and scalable paradigm for enhancing fine-grained visual understanding in VLMs.

📝 Abstract
Vision-Language Models (VLMs) often struggle with tasks that require fine-grained image understanding, such as scene-text recognition or document analysis, due to perception limitations and visual fragmentation. To address these challenges, we introduce CropVLM as an external low-cost method for boosting performance, enabling VLMs to dynamically "zoom in" on relevant image regions, enhancing their ability to capture fine details. CropVLM is trained using reinforcement learning, without using human-labeled bounding boxes as a supervision signal, and without expensive synthetic evaluations. The model is trained once and can be paired with both open-source and proprietary VLMs to improve their performance. Our approach delivers significant improvements on tasks that require high-resolution image understanding, notably for benchmarks that are out-of-domain for the target VLM, without modifying or fine-tuning the VLM, thus avoiding catastrophic forgetting.
Problem

Research questions and friction points this paper is trying to address.

Enhancing fine-grained image understanding in Vision-Language Models
Enabling dynamic zooming on relevant image regions without supervision
Improving performance on high-resolution tasks without model fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic zooming on image regions for detail enhancement
Reinforcement learning without human-labeled bounding boxes
Compatible with various VLMs without fine-tuning
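The plug-and-play crop-then-answer idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual interface: the `crop_model` policy, the `vlm` callable, and the normalized bounding-box convention are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class BBox:
    """Normalized [0, 1] coordinates of a predicted crop (hypothetical)."""
    x0: float
    y0: float
    x1: float
    y1: float


def crop_region(image, bbox, width, height):
    """Extract the pixel window named by a normalized bounding box.

    `image` is a row-major list of rows, a simple stand-in for an
    image tensor in this sketch.
    """
    left, top = int(bbox.x0 * width), int(bbox.y0 * height)
    right, bottom = int(bbox.x1 * width), int(bbox.y1 * height)
    return [row[left:right] for row in image[top:bottom]]


def answer_with_zoom(vlm, crop_model, image, question, width, height):
    """Plug-and-play wrapper: the frozen VLM receives the full image plus
    a question-conditioned crop chosen by the external crop model, so the
    VLM itself is never modified or fine-tuned."""
    bbox = crop_model(image, question)  # hypothetical policy call
    zoomed = crop_region(image, bbox, width, height)
    return vlm(images=[image, zoomed], question=question)
```

Because the crop model only produces a region and the base VLM is queried as a black box, the same wrapper could in principle sit in front of either an open-source or a proprietary model, which is the compatibility property the bullets above highlight.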
Miguel Carvalho
INESC-ID, Instituto Superior Técnico, University of Lisbon
Helder Dias
INESC-ID, Instituto Superior Técnico, University of Lisbon
Bruno Martins
Instituto Superior Técnico and INESC-ID, University of Lisbon
Data Science · Language Technologies · Information Retrieval · Geospatial A.I.