🤖 AI Summary
This work addresses the challenge of reliably evaluating large vision-language models (VLMs) as automated similarity discriminators. We propose PairBench, the first systematic, low-overhead evaluation framework for this purpose. Methodologically, we introduce a customizable pairwise-comparison paradigm spanning multiple modalities and define four quantifiable criteria: alignment with human annotations, order-agnostic consistency, smoothness of similarity distributions, and controllability through prompting. Key contributions include: (1) the first empirical evidence that mainstream VLMs exhibit pervasive order asymmetry in pairwise similarity judgments; (2) identification of significant performance divergence across models on the four criteria, showing that no single model dominates; and (3) validation that PairBench scores correlate strongly with established benchmarks (Pearson's *r* > 0.9), demonstrating high predictive validity and providing a principled basis for selecting VLM auto-evaluators.
📝 Abstract
As large vision-language models (VLMs) are increasingly used as automated evaluators, understanding their ability to compare data pairs as instructed in the prompt becomes essential. To address this, we present PairBench, a low-cost framework that systematically evaluates VLMs as customizable similarity tools across various modalities and scenarios. Through PairBench, we introduce four metrics that represent key desiderata of similarity scores: alignment with human annotations, consistency for data pairs irrespective of their order, smoothness of similarity distributions, and controllability through prompting. Our analysis demonstrates that no model, whether closed- or open-source, is superior on all metrics; the optimal choice depends on an auto-evaluator's desired behavior (e.g., a smooth vs. a sharp judge), highlighting the risks of widespread adoption of VLMs as evaluators without thorough assessment. For instance, the majority of VLMs struggle to maintain symmetric similarity scores when the order of a pair is swapped. Additionally, our results show that the performance of VLMs on the PairBench metrics closely correlates with popular benchmarks, showcasing its predictive power in ranking models.
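The order-consistency desideratum above can be made concrete with a small sketch: query a similarity judge twice with the pair order swapped and measure the average gap. This is an illustrative stand-in, not PairBench's actual implementation; `score_fn` is a hypothetical placeholder for any judge (e.g. a VLM prompted to rate a pair on [0, 1]).

```python
def order_consistency_gap(score_fn, pairs):
    """Mean absolute difference between score(a, b) and score(b, a).

    A perfectly order-agnostic judge yields 0.0; larger values indicate
    the asymmetry the paper reports in many VLMs. `score_fn` is a
    hypothetical callable standing in for a prompted VLM judge.
    """
    gaps = [abs(score_fn(a, b) - score_fn(b, a)) for a, b in pairs]
    return sum(gaps) / len(gaps)

# Toy judge that is symmetric by construction, so the gap is zero.
sym_judge = lambda a, b: abs(len(a) - len(b)) / 10
print(order_consistency_gap(sym_judge, [("cat", "cats"), ("dog", "horse")]))  # → 0.0
```

An asymmetric judge (one whose score depends on which item comes first) would produce a strictly positive gap, which is exactly the failure mode the framework is designed to surface.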