🤖 AI Summary
This work addresses the weak uncertainty awareness and limited discriminative capability of large vision-language models (LVLMs) in out-of-distribution detection (OoDD). The authors propose ReGuide, a fine-tuning-free, annotation-free, self-guided prompting paradigm that leverages an LVLM's own generative capacity to produce image-conditioned semantic concepts, enabling intrinsic uncertainty modeling through prompt-engineered self-feedback, parsing of confidence expressed in natural-language responses, and dynamic confidence calibration. On standard benchmarks such as ImageNet-O, ReGuide substantially improves the OoDD performance of mainstream LVLMs (e.g., GPT-4o), reducing FPR95 by 12.3% while preserving in-distribution classification accuracy. The key contribution is the first principled use of LVLMs' generative capability as a self-supervised signal for OoDD, establishing a lightweight, generalizable, and interpretable paradigm for deploying trustworthy multimodal foundation models.
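For readers unfamiliar with the reported metric: FPR95 is the false positive rate on out-of-distribution inputs measured at the threshold where 95% of in-distribution inputs are correctly accepted. A minimal sketch of its computation, assuming each sample gets a scalar confidence score where higher means more likely in-distribution:

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OoD samples mistakenly accepted as in-distribution
    at the threshold that accepts 95% of in-distribution samples.
    Assumes higher score = more likely in-distribution."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    # The 5th percentile of ID scores is the threshold giving ~95% TPR on ID.
    threshold = np.percentile(id_scores, 5)
    # OoD samples scoring at or above the threshold are false positives.
    return float(np.mean(ood_scores >= threshold))
```

Lower is better: well-separated score distributions drive FPR95 toward 0, while heavy overlap between ID and OoD scores drives it toward 1.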
📝 Abstract
Foundation models trained on internet-scale data have recently emerged with remarkable generalization capabilities, leading to widespread adoption across an expanding range of application domains. Despite this rapid proliferation, the trustworthiness of foundation models remains underexplored. Specifically, the out-of-distribution detection (OoDD) capabilities of large vision-language models (LVLMs), such as GPT-4o, which are trained on massive multi-modal data, have not been sufficiently addressed. The disparity between their demonstrated potential and practical reliability raises concerns regarding the safe and trustworthy deployment of foundation models. To address this gap, we evaluate and analyze the OoDD capabilities of various proprietary and open-source LVLMs. Our investigation contributes to a better understanding of how these foundation models represent confidence scores through their generated natural language responses. Furthermore, we propose a self-guided prompting approach, termed Reflexive Guidance (ReGuide), aimed at enhancing the OoDD capability of LVLMs by leveraging self-generated image-adaptive concept suggestions. Experimental results demonstrate that ReGuide enhances the performance of current LVLMs in both image classification and OoDD tasks. The lists of sampled images, along with the prompts and responses for each sample, are available at https://github.com/daintlab/ReGuide.
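The self-guided prompting idea in the abstract can be sketched as a two-stage loop: first ask the model itself to suggest image-adaptive auxiliary concepts, then classify over the in-distribution classes plus those suggestions. The `query_lvlm` interface and the exact prompt wording below are illustrative assumptions, not the paper's implementation:

```python
from typing import Callable, List

def reguide(image, id_classes: List[str],
            query_lvlm: Callable[[object, str], str],
            n_concepts: int = 10) -> str:
    """Two-stage self-guided prompting, sketched from the abstract.
    `query_lvlm(image, prompt) -> str` is a hypothetical LVLM API."""
    # Stage 1: ask the model for image-adaptive auxiliary concepts.
    stage1 = (f"Suggest {n_concepts} class names that are visually similar to "
              f"this image but different from: {', '.join(id_classes)}. "
              "Answer as a comma-separated list.")
    suggested = [c.strip() for c in query_lvlm(image, stage1).split(",")]
    # Stage 2: classify over ID classes plus the self-suggested concepts;
    # a prediction outside `id_classes` is treated as out-of-distribution.
    stage2 = ("Classify the image into exactly one of: "
              + ", ".join(id_classes + suggested)
              + ". Reply with the class name only.")
    prediction = query_lvlm(image, stage2).strip()
    return prediction if prediction in id_classes else "OOD"
```

Because the auxiliary concepts are generated per image, the negative classes adapt to each input rather than relying on a fixed, hand-curated OoD label set.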