Learning What to Trust: Bayesian Prior-Guided Optimization for Visual Generation

πŸ“… 2025-11-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In visual generation post-training, GRPO suffers from ambiguous, weakly discriminative reward signals caused by many-to-many text–vision mappings, leading it to overfit noisy rewards. To address this, the paper proposes Bayesian Prior-Guided Optimization (BPGO), the first GRPO framework to incorporate Bayesian uncertainty modeling: it quantifies reward uncertainty via a semantic prior anchor, introduces a two-level dynamic trust-weighting mechanism (inter-group and intra-group), and employs prior anchoring with intra-group renormalization to enhance semantic consistency. Evaluated on both image and video generation tasks, BPGO significantly improves semantic alignment and perceptual quality, accelerates convergence, and consistently outperforms standard GRPO and existing variants across all metrics.

πŸ“ Abstract
Group Relative Policy Optimization (GRPO) has emerged as an effective and lightweight framework for post-training visual generative models. However, its performance is fundamentally limited by the ambiguity of textual–visual correspondence: a single prompt may validly describe diverse visual outputs, and a single image or video may support multiple equally correct interpretations. This many-to-many relationship leads reward models to generate uncertain and weakly discriminative signals, causing GRPO to underutilize reliable feedback and overfit to noisy feedback. We introduce Bayesian Prior-Guided Optimization (BPGO), a novel extension of GRPO that explicitly models reward uncertainty through a semantic prior anchor. BPGO adaptively modulates optimization trust at two levels: inter-group Bayesian trust allocation emphasizes updates from groups consistent with the prior while down-weighting ambiguous ones, and intra-group prior-anchored renormalization sharpens sample distinctions by expanding confident deviations and compressing uncertain scores. Across both image and video generation tasks, BPGO delivers consistently stronger semantic alignment, enhanced perceptual fidelity, and faster convergence than standard GRPO and recent variants.
Problem

Research questions and friction points this paper is trying to address.

Addresses ambiguity in text-visual correspondence for generative models
Mitigates reward model uncertainty in visual generation optimization
Resolves overfitting to noisy feedback in policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models reward uncertainty via semantic prior anchor
Modulates trust at inter-group and intra-group levels
Enhances semantic alignment and perceptual fidelity
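The two-level mechanism above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the exponential trust weight, the `alpha` temperature, and the exact normalization are all assumptions chosen to mirror the described behavior (trust groups that agree with the prior anchor; renormalize each group's scores around that anchor).

```python
import numpy as np

def bpgo_trust_and_advantages(rewards, prior, alpha=1.0, eps=1e-8):
    """Illustrative sketch of BPGO-style two-level trust weighting.

    rewards: (G, N) reward-model scores for G prompt groups of N samples.
    prior:   (G,) semantic-prior anchor score per group.
    alpha and every formula below are illustrative assumptions; the
    paper's exact equations may differ.
    """
    rewards = np.asarray(rewards, dtype=float)
    prior = np.asarray(prior, dtype=float)

    # Inter-group Bayesian trust allocation: groups whose mean reward
    # agrees with the prior anchor receive more optimization weight,
    # while ambiguous (far-from-prior) groups are down-weighted.
    gap = np.abs(rewards.mean(axis=1) - prior)
    trust = np.exp(-alpha * gap)
    trust /= trust.sum()

    # Intra-group prior-anchored renormalization: deviations are measured
    # from the anchor (not the group mean) and scaled by the group's
    # spread, expanding confident deviations and compressing uncertain,
    # near-anchor scores.
    deviation = rewards - prior[:, None]
    advantages = deviation / (rewards.std(axis=1, keepdims=True) + eps)

    return trust, advantages
```

A policy update would then scale each sample's advantage by its group's trust, e.g. `trust[:, None] * advantages`, so reliable groups dominate the gradient.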