🤖 AI Summary
This work addresses the limited capability of existing models in fine-grained image understanding and descriptive captioning by introducing panoptic segmentation-captioning, a novel task requiring dense, semantically grounded textual descriptions aligned with every pixel. We present Pix2Cap-COCO, the first instance-aligned, context-aware panoptic pixel-level caption dataset, comprising 167,254 captions, built with a GPT-4V-driven automatic annotation pipeline for precise pixel-caption alignment. To benchmark joint panoptic segmentation and captioning, a strong baseline is built on the X-Decoder architecture, and the dataset is further used for supervised fine-tuning (SFT) of large multimodal models. Fine-tuning GPT4RoI on Pix2Cap-COCO yields a +1.4% CIDEr improvement on Visual Genome and a +5.1% overall gain on ViP-BENCH, including +11.2% in recognition accuracy and +22.2% in language generation quality, marking significant progress in pixel-level vision-language understanding and generation.
📝 Abstract
We present Pix2Cap-COCO, the first panoptic pixel-level caption dataset designed to advance fine-grained visual understanding. To achieve this, we carefully design an automated annotation pipeline that prompts GPT-4V to generate pixel-aligned, instance-specific captions for individual objects within images, enabling models to learn more granular relationships between objects and their contexts. This approach results in 167,254 detailed captions, with an average of 22.94 words per caption. Building on Pix2Cap-COCO, we introduce a novel task, panoptic segmentation-captioning, which challenges models to simultaneously recognize instances in an image and provide a detailed description of each. To benchmark this task, we design a robust baseline based on X-Decoder. The experimental results demonstrate that Pix2Cap-COCO is a particularly challenging dataset, as it requires models to excel in both fine-grained visual understanding and detailed language generation. Furthermore, we leverage Pix2Cap-COCO for Supervised Fine-Tuning (SFT) of large multimodal models (LMMs) to enhance their performance. For example, training with Pix2Cap-COCO significantly improves the performance of GPT4RoI, yielding gains of +1.4% CIDEr, +0.4% ROUGE, and +0.5% SPICE on the Visual Genome dataset, and strengthens its region understanding on ViP-BENCH, with an overall improvement of +5.1%, including notable increases of +11.2% in recognition accuracy and +22.2% in language generation quality.
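To make the annotation pipeline concrete, the sketch below shows how one might assemble per-instance captioning requests for GPT-4V from a panoptic segmentation of a single image. This is a minimal illustration, not the paper's actual pipeline: the prompt wording, the instance schema, and the function names (`build_instance_prompt`, `build_annotation_requests`) are all assumptions, and the real system would attach the image and mask to each API call.

```python
import json

def build_instance_prompt(category, bbox, image_size):
    """Compose an illustrative GPT-4V prompt asking for a caption grounded
    to one panoptic instance (the paper's actual prompt is not reproduced)."""
    w, h = image_size
    return (
        f"The image is {w}x{h} pixels. Focus only on the '{category}' instance "
        f"inside bounding box {bbox}. Describe its appearance, state, and "
        "relationship to surrounding objects in one detailed sentence."
    )

def build_annotation_requests(panoptic_instances, image_size):
    """One caption request per panoptic instance, so every segmented region
    receives its own instance-specific, pixel-aligned description."""
    return [
        {
            "instance_id": inst["id"],
            "category": inst["category"],
            "prompt": build_instance_prompt(inst["category"], inst["bbox"], image_size),
        }
        for inst in panoptic_instances
    ]

# Example: two COCO-style panoptic instances from one image
instances = [
    {"id": 1, "category": "dog", "bbox": [34, 50, 210, 180]},
    {"id": 2, "category": "frisbee", "bbox": [220, 40, 60, 58]},
]
requests = build_annotation_requests(instances, (640, 480))
print(json.dumps(requests, indent=2))
```

Each request would then be sent to GPT-4V alongside the image and the instance's mask, and the returned caption stored against that instance's pixel region, yielding the instance-caption pairs that make up the dataset.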