🤖 AI Summary
Existing REC benchmarks rely either on intra-image cues or on coarse-grained annotations, and therefore fail to rigorously evaluate the genuine cross-modal reasoning capabilities of multimodal large language models (MLLMs). To address this, we propose KnowDR-REC, the first knowledge-driven, fine-grained REC benchmark. Our method introduces: (1) referring expression generation enhanced with real-world knowledge; (2) a fine-grained negative-sample editing strategy grounded in semantic consistency; and (3) three novel evaluation metrics that probe the reasoning mechanisms underlying text–vision alignment: decoupling, knowledge sensitivity, and hallucination robustness. Extensive experiments across 16 state-of-the-art MLLMs reveal that current models rely predominantly on superficial, memorization-based associations, exhibit severe decoupling between textual comprehension and visual grounding, and integrate knowledge only weakly. KnowDR-REC establishes a new paradigm for rigorously assessing and advancing deep cross-modal reasoning in MLLMs.
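The paper's exact metric definitions are not reproduced here, but a minimal sketch helps fix the ideas. Assuming the standard REC protocol (a prediction counts as correct if its IoU with the ground-truth box is at least 0.5) and assuming negative samples carry no ground-truth box, hallucination robustness can be read as the rate at which a model correctly declines to ground an unanswerable expression. The `Sample` fields and both function names below are hypothetical, not the benchmark's actual API.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    expression: str          # referring expression, possibly knowledge-dependent
    gt_box: tuple | None     # (x1, y1, x2, y2); None marks a negative sample
    pred_box: tuple | None   # model output; None if the model abstains

def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(samples: list[Sample], thr: float = 0.5) -> float:
    """Standard REC accuracy on positive samples: IoU(pred, gt) >= thr."""
    pos = [s for s in samples if s.gt_box is not None]
    hits = sum(1 for s in pos
               if s.pred_box is not None and iou(s.pred_box, s.gt_box) >= thr)
    return hits / len(pos)

def hallucination_robustness(samples: list[Sample]) -> float:
    """Fraction of negative (unanswerable) expressions the model
    correctly refuses to ground, i.e. emits no box at all."""
    neg = [s for s in samples if s.gt_box is None]
    return sum(1 for s in neg if s.pred_box is None) / len(neg)
```

A model that hallucinates a box for every edited expression would score high on positives yet near zero on the second metric, which is exactly the failure mode the benchmark is designed to expose.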
📝 Abstract
Referring Expression Comprehension (REC) is a popular multimodal task that aims to accurately detect target objects within a single image based on a given textual expression. However, owing to the limitations of earlier models, traditional REC benchmarks either rely solely on intra-image cues or lack sufficiently fine-grained instance annotations, making them inadequate for evaluating the reasoning capabilities of Multimodal Large Language Models (MLLMs). To address this gap, we propose a new benchmark, KnowDR-REC, characterized by three key features. First, it is built upon real-world knowledge, requiring fine-grained multimodal reasoning across text and image. Second, the dataset includes negative samples elaborately constructed via fine-grained expression editing, designed to evaluate a model's robustness and anti-hallucination ability. Third, we introduce three novel evaluation metrics to systematically probe the model's internal reasoning process. We evaluate 16 state-of-the-art multimodal models on KnowDR-REC; the results show that existing MLLMs still struggle with knowledge-driven visual grounding tasks. Furthermore, we observe a decoupling between textual understanding and visual grounding in MLLMs: many models are heavily influenced by memorized shortcut correlations, which severely affect their behavior on our benchmark and hinder genuine multimodal reasoning. We anticipate that the proposed benchmark will inspire future research towards robust, interpretable, and knowledge-intensive visual grounding frameworks, driving the development of more reliable multimodal systems for complex real-world scenarios.
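To make the negative-sample construction concrete, here is a toy sketch of the kind of fine-grained edit the abstract describes: a single knowledge-bearing cue is replaced so that the expression no longer refers to anything in the image, while staying lexically close to the positive. The example expression, box coordinates, and `make_negative` helper are invented for illustration; the paper's actual editing strategy is more elaborate.

```python
# Hypothetical positive sample: the expression requires external knowledge
# (which company Steve Jobs founded) before any visual grounding can happen.
POSITIVE = {
    "expression": "the phone made by the company founded by Steve Jobs",
    "gt_box": (120, 48, 310, 200),  # ground-truth box in the image
}

def make_negative(sample: dict, original: str, replacement: str) -> dict:
    """Swap one factual cue while keeping the rest of the expression intact;
    gt_box=None marks the edited expression as unanswerable in the image."""
    return {
        "expression": sample["expression"].replace(original, replacement),
        "gt_box": None,
    }

NEGATIVE = make_negative(POSITIVE, "Steve Jobs", "Bill Gates")
# "the phone made by the company founded by Bill Gates" -- assuming no such
# phone appears in the image, a robust model should output no box rather
# than fall back on the shortcut "expression mentions a phone, so box the phone".
```

Because the positive and negative expressions differ by only a few tokens, a model that grounds via memorized text-to-region shortcuts will produce the same box for both, which is how such edits separate genuine knowledge-driven reasoning from pattern matching.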