🤖 AI Summary
Large multimodal models (LMMs) suffer from modality imbalance: strong linguistic priors overwhelm visual inputs, hurting generalization and causing frequent hallucinations. Existing preference optimization methods neither suppress the inherent biases of the LLM backbone when curating training data nor adapt to distributional shifts during training, since they rely on static offline data. Meanwhile, Group Relative Policy Optimization (GRPO), which uses online-generated data and verified rewards to improve reasoning, remains largely unexplored for LMM alignment. This paper proposes Modality-Balancing Preference Optimization (MBPO), a novel framework that combines offline adversarial negative sampling, which generates hard negatives biased toward language priors via image perturbations, with online responses scored by verifiable rewards on closed-ended tasks. GRPO then trains the model on the hybrid offline-online data. Extensive experiments demonstrate significant gains across multiple vision-language benchmarks, substantial hallucination reduction, and consistent superiority over state-of-the-art preference optimization approaches.
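The offline hard-negative construction described above can be sketched as follows. This is a minimal illustration, not MBPO's actual implementation: the image is a flat list of pixel intensities, the perturbation is simple additive noise, and `model`, `perturb_image`, and `build_preference_pair` are all hypothetical names.

```python
import random

def perturb_image(pixels, noise_scale=0.3, seed=0):
    # Illustrative adversarial-style perturbation: additive noise degrades the
    # visual evidence, so the model's answer leans on its language priors.
    # (MBPO's actual perturbation scheme may differ; this is an assumption.)
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, noise_scale))) for p in pixels]

def build_preference_pair(model, image, question, gold):
    # Chosen: response grounded in the clean image.
    # Rejected: response to the perturbed image -- a "hard negative" that
    # reflects the LLM backbone's bias rather than the visual input.
    chosen = model(image, question)
    rejected = model(perturb_image(image), question)
    if rejected == gold:
        return None  # perturbation failed to elicit a biased answer; discard
    return {"prompt": question, "chosen": chosen, "rejected": rejected}
```

Pairs where the perturbed-image response still matches the ground truth carry no preference signal, so the sketch drops them; the surviving rejected responses are exactly the language-prior-driven errors the framework aims to penalize.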
📝 Abstract
The task adaptation and alignment of Large Multimodal Models (LMMs) have been significantly advanced by instruction tuning and further strengthened by recent preference optimization. Yet, most LMMs still suffer from severe modality imbalance during reasoning, i.e., over-weighting language prior biases relative to visual inputs, which bottlenecks their generalization to downstream tasks and causes hallucinations. However, existing preference optimization approaches for LMMs do not focus on restraining the internal biases of their Large Language Model (LLM) backbones when curating the training data. Moreover, they heavily rely on offline data and lack the capacity to explore diverse responses adaptive to dynamic distributional shifts during training. Meanwhile, Group Relative Policy Optimization (GRPO), a recent method that uses online-generated data and verified rewards to improve reasoning capabilities, remains largely underexplored in LMM alignment. In this paper, we propose a novel preference learning framework, Modality-Balancing Preference Optimization (MBPO), to address modality imbalance in LMMs. MBPO constructs a more effective offline preference dataset by generating hard negatives, i.e., rejected responses misled by LLM biases due to limited usage of visual information, through adversarial perturbation of input images. Moreover, MBPO leverages the easy-to-verify nature of closed-ended tasks to generate online responses with verified rewards. GRPO is then employed to train the model on the hybrid offline-online data. Extensive experiments demonstrate that MBPO can enhance LMM performance on challenging vision-language tasks and effectively reduce hallucinations.
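The online half of the pipeline hinges on two simple pieces: a verifiable reward for closed-ended tasks, and GRPO's group-relative advantage, which normalizes each sampled response's reward against its group's statistics instead of using a learned critic. A minimal sketch, assuming exact-match verification and the standard GRPO normalization (function names are illustrative):

```python
def verified_reward(response, gold):
    # Closed-ended tasks are easy to verify: exact match (case- and
    # whitespace-insensitive here, as a simplifying assumption) yields
    # a binary reward.
    return 1.0 if response.strip().lower() == gold.strip().lower() else 0.0

def group_relative_advantages(rewards, eps=1e-8):
    # GRPO's critic-free baseline: each reward in a group of sampled
    # responses is standardized by the group's mean and std.
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

For a group of sampled responses to one prompt, correct answers receive positive advantages and incorrect ones negative advantages, so the policy gradient pushes probability mass toward visually grounded, verifiably correct responses.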