🤖 AI Summary
This study addresses the underperformance of vision large language models (VLLMs) on fine-grained visual and spatial reasoning tasks, a limitation whose root cause remains unclear. By constructing a synthetic visual benchmark dataset and introducing quantitative metrics for visual redundancy, the work systematically investigates how task complexity influences visual token compression and representation distributions within VLLMs. The findings reveal that training on high-complexity tasks significantly reshapes the distribution of visual tokens, improving performance on challenging visual reasoning tasks. This uncovers a link between task complexity and visual token compression, suggesting that a sufficient proportion of high-complexity data is essential for developing the visual representation capabilities of VLLMs.
📝 Abstract
Vision capabilities in vision large language models (VLLMs) have consistently lagged behind their linguistic capabilities. In particular, numerous benchmark studies have demonstrated that VLLMs struggle when fine-grained visual information or spatial reasoning is required. However, it is not yet well understood why VLLMs struggle so much with these tasks relative to others. Some works have pointed to visual redundancy as an explanation: high-level visual information is spread uniformly across numerous tokens, while specific, fine-grained visual information is discarded. In this work, we investigate this premise in greater detail, seeking to understand exactly how various types of visual information are processed by the model and which types are discarded. To do so, we introduce a simple synthetic benchmark dataset specifically constructed to probe various visual features, along with a set of metrics for measuring visual redundancy, allowing us to better understand the nuances of their relationship. We then fine-tune VLLMs on a number of complex visual tasks to understand how redundancy and compression change with the complexity of the training data. We find a connection between task complexity and visual compression, implying that a sufficient ratio of high-complexity visual data is crucial for altering how VLLMs distribute their visual representations and, consequently, for improving their performance on complex visual tasks. We hope that this work provides valuable insights for training the next generation of VLLMs.