🤖 AI Summary
Multimodal large language models (MLLMs) suffer from object hallucination: incorrectly identifying non-existent objects in images. Existing static benchmarks fail to uncover model-specific or previously unknown vulnerabilities. This paper introduces GHOST, the first fully automated hallucination-induction framework requiring no human supervision. GHOST performs black-box optimization over image embeddings to guide a conditional diffusion model in generating visually natural yet subtly misleading images, enabling targeted stress-testing of MLLMs. The method supports both diagnosis and mitigation, and enables cross-model transferable attacks. Evaluated on mainstream MLLMs, GHOST achieves an average hallucination success rate of 28.1%, substantially surpassing prior approaches (~1%). In cross-model attacks, e.g., against GPT-4o, the success rate reaches 66.5%. Human evaluation confirms that the generated images are high-quality and contain no target objects, validating their stealth and efficacy.
📝 Abstract
Object hallucination in Multimodal Large Language Models (MLLMs) is a persistent failure mode that causes the model to perceive objects absent from the image. This weakness of MLLMs is currently studied using static benchmarks with fixed visual scenarios, which precludes uncovering model-specific or unanticipated hallucination vulnerabilities. We introduce GHOST (Generating Hallucinations via Optimizing Stealth Tokens), a method designed to stress-test MLLMs by actively generating images that induce hallucination. GHOST is fully automatic and requires no human supervision or prior knowledge. It operates by optimizing in the image embedding space to mislead the model while keeping the target object absent, and then guiding a diffusion model conditioned on the embedding to generate natural-looking images. The resulting images remain visually natural and close to the original input, yet introduce subtle misleading cues that cause the model to hallucinate. We evaluate our method across a range of models, including reasoning models like GLM-4.1V-Thinking, and achieve a hallucination success rate exceeding 28%, compared to around 1% for prior data-driven discovery methods. We confirm that the generated images are both high-quality and object-free through quantitative metrics and human evaluation. Moreover, GHOST uncovers transferable vulnerabilities: images optimized for Qwen2.5-VL induce hallucinations in GPT-4o at a 66.5% rate. Finally, we show that fine-tuning on our images mitigates hallucination, positioning GHOST as both a diagnostic and corrective tool for building more reliable multimodal systems.
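The embedding-space search described in the abstract can be sketched as a simple black-box loop: perturb an image embedding, score each candidate by how strongly it makes the model hallucinate, and keep improvements. The sketch below is a minimal illustration under stated assumptions, not the paper's actual algorithm: `toy_hallucination_score` is a placeholder for querying the victim MLLM (e.g., its "yes" probability on a probe question like "Is there a clock in this image?"), and the decoding of the optimized embedding into a natural image via a conditional diffusion model is omitted. All names are hypothetical.

```python
import numpy as np

# Toy stand-in for the hallucination score. In the real pipeline this would
# decode the embedding to an image with a diffusion model and query the
# victim MLLM; here, cosine similarity to a fixed random "hallucination
# direction" serves as an illustrative placeholder.
_rng = np.random.default_rng(0)
DIM = 64
_target_dir = _rng.standard_normal(DIM)
_target_dir /= np.linalg.norm(_target_dir)

def toy_hallucination_score(embedding: np.ndarray) -> float:
    return float(embedding @ _target_dir)

def ghost_optimize(embedding, score_fn, steps=300, sigma=0.05, seed=1):
    """Black-box random search in image-embedding space.

    Proposes small Gaussian perturbations of the current embedding and keeps
    a proposal whenever it raises score_fn. In the full GHOST pipeline, the
    returned embedding would then condition a diffusion decoder to generate
    the final natural-looking image.
    """
    rng = np.random.default_rng(seed)
    best = embedding / np.linalg.norm(embedding)
    best_score = score_fn(best)
    for _ in range(steps):
        candidate = best + sigma * rng.standard_normal(best.shape)
        candidate /= np.linalg.norm(candidate)  # keep unit norm, CLIP-style
        s = score_fn(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Usage: start from a random unit embedding and maximize the toy score.
x0 = _rng.standard_normal(DIM)
x0 /= np.linalg.norm(x0)
optimized, final_score = ghost_optimize(x0, toy_hallucination_score)
```

Because only score queries are needed, this style of search treats the target model as a black box, which is what makes the attack applicable to closed models and, per the abstract, transferable across them.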