🤖 AI Summary
This paper addresses the weak numerical perception and the overreliance on category-level annotation priors in open-vocabulary, text-prompted object counting. To this end, the authors propose QUANet, a framework for zero-shot, class-agnostic counting. The key contributions are: (1) quantity-oriented textual prompts that explicitly encode numerical semantics; (2) a vision-text quantity alignment loss enforcing cross-modal numerical consistency; (3) a dual-stream adaptive counting decoder, with a cross-stream quantity ranking loss that optimizes the ordinal relationships among the two streams' predicted counts; and (4) Transformer-to-CNN enhancement adapters (T2C-adapters) that enable knowledge communication and aggregation between the Transformer and CNN streams for density map prediction. Extensive experiments on FSC-147, CARPK, PUCPR+, and ShanghaiTech demonstrate strong zero-shot, class-agnostic counting performance and improved generalization to unseen categories and numerical scales.
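To make the idea of quantity-oriented prompting concrete, here is a minimal sketch of how a text prompt could encode a coarse quantity descriptor alongside the category. The bin boundaries, quantity phrases, and template are assumptions for illustration; the paper's actual prompt design may differ.

```python
# Hypothetical quantity bins mapping a count to a coarse quantity phrase.
# These boundaries and phrases are illustrative assumptions, not the
# paper's actual design.
QUANTITY_BINS = [
    (0, 10, "a few"),
    (10, 50, "several"),
    (50, 200, "many"),
    (200, float("inf"), "a large number of"),
]

def quantity_aware_prompt(category: str, count: int) -> str:
    """Build a text prompt that encodes both the object category and a
    coarse quantity descriptor, so the text encoder is exposed to
    numerical semantics rather than category information alone."""
    for lo, hi, phrase in QUANTITY_BINS:
        if lo <= count < hi:
            return f"a photo of {phrase} {category}"
    return f"a photo of {category}"
```

During training, such prompts could be paired with images whose ground-truth counts fall in the corresponding bin, giving a contrastive alignment signal a numerical axis in addition to the categorical one.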
📝 Abstract
Recent advances in large vision-language models (VLMs) have shown remarkable progress in solving the text-promptable object counting problem. Representative methods typically specify text prompts with object category information in images. This, however, is insufficient for training the model to accurately distinguish the number of objects in the counting task. To this end, we propose QUANet, which introduces novel quantity-oriented text prompts with a vision-text quantity alignment loss to enhance the model's quantity awareness. Moreover, we propose a dual-stream adaptive counting decoder consisting of a Transformer stream, a CNN stream, and a number of Transformer-to-CNN enhancement adapters (T2C-adapters) for density map prediction. The T2C-adapters facilitate effective knowledge communication and aggregation between the Transformer and CNN streams. Finally, a cross-stream quantity ranking loss is proposed to optimize the ranking orders of predictions from the two streams. Extensive experiments on standard benchmarks such as FSC-147, CARPK, PUCPR+, and ShanghaiTech demonstrate our model's strong generalizability for zero-shot class-agnostic counting. Code is available at https://github.com/viscom-tongji/QUANet.
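The cross-stream quantity ranking loss can be sketched as a pairwise margin ranking objective: for image pairs whose ground-truth counts differ, each stream's predicted counts should preserve the ground-truth ordering. This is a minimal illustration under that assumption; the paper's exact formulation (margin value, pair sampling, how the two streams interact) is not specified here.

```python
def pairwise_ranking_loss(preds, gts, margin=1.0):
    """Hinge-style ranking loss: for every pair (i, j) with gts[i] > gts[j],
    penalize the predictions unless preds[i] exceeds preds[j] by at least
    `margin`. Returns the mean hinge over all ordered pairs."""
    loss, pairs = 0.0, 0
    for i in range(len(preds)):
        for j in range(len(preds)):
            if gts[i] > gts[j]:
                loss += max(0.0, margin - (preds[i] - preds[j]))
                pairs += 1
    return loss / max(pairs, 1)

def cross_stream_ranking_loss(trans_counts, cnn_counts, gt_counts, margin=1.0):
    """Apply the ranking constraint to the predicted counts of both the
    Transformer stream and the CNN stream (a simplifying assumption about
    how the cross-stream loss combines the two)."""
    return (pairwise_ranking_loss(trans_counts, gt_counts, margin)
            + pairwise_ranking_loss(cnn_counts, gt_counts, margin))
```

When both streams already rank a batch consistently with the ground truth by at least the margin, the loss is zero; misordered pairs contribute a penalty proportional to how far the ordering is violated.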