AI Summary
In vision generation post-training, GRPO suffers from ambiguous, weakly discriminative reward signals caused by many-to-many text-vision mappings, leading it to overfit noisy rewards. To address this, we propose Bayesian Prior-Guided Optimization (BPGO), the first GRPO framework to incorporate Bayesian uncertainty modeling: it quantifies reward uncertainty via a semantic prior anchor, introduces a two-level dynamic trust-weighting mechanism (inter-group and intra-group), and employs prior anchoring with intra-group renormalization to enhance semantic consistency. Evaluated on both image and video generation tasks, BPGO significantly improves semantic alignment and perceptual quality, accelerates convergence, and consistently outperforms standard GRPO and existing variants across all metrics.
Abstract
Group Relative Policy Optimization (GRPO) has emerged as an effective and lightweight framework for post-training visual generative models. However, its performance is fundamentally limited by the ambiguity of textual-visual correspondence: a single prompt may validly describe diverse visual outputs, and a single image or video may support multiple equally correct interpretations. This many-to-many relationship leads reward models to produce uncertain and weakly discriminative signals, causing GRPO to underutilize reliable feedback and overfit to noisy signals. We introduce Bayesian Prior-Guided Optimization (BPGO), a novel extension of GRPO that explicitly models reward uncertainty through a semantic prior anchor. BPGO adaptively modulates optimization trust at two levels: inter-group Bayesian trust allocation emphasizes updates from groups consistent with the prior while down-weighting ambiguous ones, and intra-group prior-anchored renormalization sharpens sample distinctions by expanding confident deviations and compressing uncertain scores. Across both image and video generation tasks, BPGO delivers consistently stronger semantic alignment, higher perceptual fidelity, and faster convergence than standard GRPO and recent variants.
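As a rough illustration of the two-level mechanism described above, the sketch below computes trust-weighted advantages from grouped reward scores. The function name, the softmax form of the inter-group trust, and the linear confidence gain on normalized deviations are all illustrative assumptions; the abstract does not specify the paper's actual functional forms.

```python
import numpy as np

def bpgo_advantages(rewards, prior_scores, tau=1.0, eps=1e-8):
    """Illustrative sketch of BPGO-style two-level trust weighting.

    rewards:      (G, N) reward-model scores for G groups of N samples.
    prior_scores: (G,)   agreement of each group with the semantic prior
                  anchor (e.g. a prompt-output similarity), assumed in [0, 1].
    """
    rewards = np.asarray(rewards, dtype=float)
    prior = np.asarray(prior_scores, dtype=float)

    # Inter-group Bayesian trust allocation: a softmax over prior agreement
    # up-weights groups consistent with the prior and down-weights
    # ambiguous ones.
    trust = np.exp(prior / tau)
    trust /= trust.sum()                                  # shape (G,)

    # Standard GRPO-style within-group normalization of rewards.
    centered = rewards - rewards.mean(axis=1, keepdims=True)
    normed = centered / (rewards.std(axis=1, keepdims=True) + eps)

    # Intra-group prior-anchored renormalization: expand deviations in
    # confident groups and compress them in uncertain ones (the linear
    # gain here is an assumption for illustration).
    adv = normed * (0.5 + prior[:, None])

    # Scale each group's advantages by its inter-group trust weight.
    return trust[:, None] * adv
```

With two identical reward groups but different prior agreement, the high-agreement group receives both a larger trust weight and expanded deviations, so its samples dominate the update while the ambiguous group's contribution is compressed.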