Text-promptable Object Counting via Quantity Awareness Enhancement

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the weak numerical perception of open-vocabulary, text-promptable object counting models, which typically rely on category-level prompts alone. To this end, the authors propose QUANet, a framework for zero-shot, class-agnostic counting. Key contributions: (1) quantity-oriented text prompts that explicitly encode numerical semantics; (2) a vision-text quantity alignment loss enforcing cross-modal numerical consistency; (3) a dual-stream adaptive counting decoder (a Transformer stream plus a CNN stream) with a cross-stream quantity ranking loss that optimizes the ordering of the two streams' count predictions; and (4) Transformer-to-CNN enhancement adapters (T2C-adapters) that pass knowledge from the Transformer stream to the CNN stream for density map prediction. Extensive experiments on FSC-147, CARPK, PUCPR+, and ShanghaiTech demonstrate strong generalizability for zero-shot class-agnostic counting, including to unseen categories and numerical scales.

📝 Abstract
Recent advances in large vision-language models (VLMs) have shown remarkable progress in solving the text-promptable object counting problem. Representative methods typically specify text prompts with object category information in images. This, however, is insufficient for training the model to accurately distinguish the number of objects in the counting task. To this end, we propose QUANet, which introduces novel quantity-oriented text prompts with a vision-text quantity alignment loss to enhance the model's quantity awareness. Moreover, we propose a dual-stream adaptive counting decoder consisting of a Transformer stream, a CNN stream, and a number of Transformer-to-CNN enhancement adapters (T2C-adapters) for density map prediction. The T2C-adapters facilitate effective knowledge communication and aggregation between the Transformer and CNN streams. Finally, a cross-stream quantity ranking loss is proposed to optimize the ranking orders of predictions from the two streams. Extensive experiments on standard benchmarks such as FSC-147, CARPK, PUCPR+, and ShanghaiTech demonstrate our model's strong generalizability for zero-shot class-agnostic counting. Code is available at https://github.com/viscom-tongji/QUANet.

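The quantity-oriented prompting and vision-text alignment described in the abstract can be sketched roughly as follows. This is a minimal illustration in plain Python; the prompt templates, quantity bins, and exact loss form are assumptions for exposition, not the paper's actual design.

```python
import math

# Hypothetical quantity bins and phrasings -- the paper's actual
# quantity-oriented prompt templates may differ.
QUANTITY_BINS = [(1, "one"), (5, "a few"), (20, "several"), (100, "many")]

def quantity_prompts(category):
    """Build one text prompt per quantity bin for a given object category."""
    return [f"a photo of {phrase} {category}" for _, phrase in QUANTITY_BINS]

def quantity_alignment_loss(similarities, target_bin):
    """Cross-entropy over image-vs-prompt similarity scores, pulling the
    image embedding toward the prompt whose quantity bin matches the
    ground-truth count (a stand-in for the vision-text alignment loss)."""
    exps = [math.exp(s) for s in similarities]
    return -math.log(exps[target_bin] / sum(exps))
```

In a VLM pipeline, `similarities` would be the cosine similarities between the image embedding and the embeddings of each quantity-aware prompt.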
Problem

Research questions and friction points this paper is trying to address.

Text-promptable counting models show weak quantity awareness when trained with category-only prompts
Category information alone is insufficient for accurately distinguishing the number of objects
Density map prediction needs complementary decoder streams to be effective
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantity-oriented text prompts with a vision-text alignment loss enhance quantity awareness
Dual-stream Transformer/CNN decoder with T2C-adapters improves density map prediction
Cross-stream quantity ranking loss optimizes the ordering of the two streams' predictions
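The cross-stream quantity ranking idea can be illustrated with a hinge-style pairwise sketch in plain Python. The pairing scheme and margin here are illustrative assumptions; the paper's exact formulation may differ.

```python
def cross_stream_ranking_loss(counts_a, counts_b, gt_counts, margin=0.0):
    """For every image pair whose ground-truth counts differ, require each
    stream's predicted count for the larger-count image to exceed the
    *other* stream's predicted count for the smaller-count image by
    at least `margin` (hypothetical simplification)."""
    loss, pairs = 0.0, 0
    for i, gi in enumerate(gt_counts):
        for j, gj in enumerate(gt_counts):
            if gi <= gj:
                continue  # only penalize pairs where image i truly has more objects
            loss += max(0.0, margin - (counts_a[i] - counts_b[j]))
            loss += max(0.0, margin - (counts_b[i] - counts_a[j]))
            pairs += 2
    return loss / max(pairs, 1)
```

When both streams already order their predictions consistently with the ground truth, the loss is zero; rank violations between streams are penalized linearly.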
Miaojing Shi
Professor at Tongji University, Visiting Senior Lecturer at King's College London
Computer Vision
Xiaowen Zhang
College of Electronic and Information Engineering, Tongji University
Zijie Yue
College of Electronic and Information Engineering, Tongji University
Yong Luo
Wuhan University
Artificial Intelligence, Machine Learning, Data Mining, Pattern Classification and Search
Cairong Zhao
Tongji University
Deep Learning, Computer Vision, Person Re-ID
Li Li
College of Electronic and Information Engineering, Tongji University