🤖 AI Summary
Large Vision-Language Models (LVLMs) commonly suffer from hallucination, i.e., inconsistency between their generated responses and the input image. Existing instruction-tuning methods rely on generic high-quality datasets and overlook the model-specific distribution of hallucinated concepts across different LVLMs, which limits their mitigation efficacy. This work is the first to reveal and quantify the concept-level model specificity of hallucinations in LVLMs. Based on this insight, we propose Directed Fine-Tuning Generation (DFTG), a two-stage framework: (1) multimodal response analysis and image-text consistency diagnosis to precisely identify the target model's hallucination patterns; and (2) controllable synthesis of targeted instruction data guided by the diagnosis, followed by lightweight fine-tuning. Experiments demonstrate that DFTG significantly outperforms baselines such as LRV-Instruction across multiple hallucination evaluation benchmarks, substantially reducing hallucination rates and improving cross-modal faithfulness.
📝 Abstract
Despite achieving outstanding performance on various cross-modal tasks, current large vision-language models (LVLMs) still suffer from hallucination issues, manifesting as inconsistencies between their generated responses and the corresponding images. Prior research has suggested that the low quality of instruction data, particularly the skewed balance between positive and negative samples, is a significant contributor to model hallucinations. Recently, researchers have proposed high-quality instruction datasets, such as LRV-Instruction, to mitigate model hallucination. Nonetheless, our investigation reveals that hallucinatory concepts from different LVLMs exhibit specificity, i.e., the distribution of hallucinatory concepts varies significantly across models. Existing datasets did not account for the hallucination specificity of different models in their design, thereby diminishing their efficacy in mitigating model hallucination. In this paper, we propose a targeted instruction data generation framework named DFTG that is tailored to the hallucination specificity of different models. Concretely, DFTG consists of two stages: hallucination diagnosis, which extracts from the model's responses and the corresponding images the information needed to identify hallucinated concepts; and targeted data generation, which generates targeted instruction data based on the diagnostic results. Experimental results on hallucination benchmarks demonstrate that the targeted instruction data generated by our method are more effective in mitigating hallucinations than previous datasets.
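The two-stage idea can be illustrated with a minimal sketch. This is a hedged toy example, not the paper's actual implementation: the diagnosis here is a simple set difference between concepts a model mentions and concepts annotated as present in each image, and the negative-question prompt template is an assumed placeholder.

```python
from collections import Counter

def diagnose_hallucinations(responses, image_annotations):
    """Stage 1 (toy): count concepts the model mentions that are
    absent from the corresponding image annotations."""
    counts = Counter()
    for mentioned, present in zip(responses, image_annotations):
        counts.update(c for c in mentioned if c not in present)
    return counts

def generate_targeted_data(hallucination_counts, top_k=2):
    """Stage 2 (toy): build negative instruction pairs targeting the
    model's most frequently hallucinated concepts."""
    data = []
    for concept, _ in hallucination_counts.most_common(top_k):
        data.append({
            "instruction": f"Is there a {concept} in the image?",
            "answer": f"No, there is no {concept} in the image.",
        })
    return data

# Toy data: concepts each response mentions vs. concepts actually present.
responses = [{"dog", "frisbee", "person"}, {"dog", "car", "cat"}, {"tree", "dog"}]
annotations = [{"person", "frisbee"}, {"car"}, {"tree"}]

counts = diagnose_hallucinations(responses, annotations)   # "dog" hallucinated 3x, "cat" 1x
targeted = generate_targeted_data(counts)
```

Because the diagnosis is computed from a specific model's own responses, the generated pairs target that model's hallucination distribution rather than a generic one, which is the core distinction from datasets like LRV-Instruction.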