🤖 AI Summary
Existing preference alignment methods such as DPO model only the population-level average preference and neglect individual belief heterogeneity, so minority preferences are marginalized by dominant opinions. To address this, we propose a belief-driven distributional preference alignment framework: (1) we formally model the population belief distribution; (2) we design a belief-conditioned preference loss; and (3) we establish a distribution-matching optimization framework. The approach aligns the model with the full preference distribution rather than its mean, supporting both the modeling and the calibration of preference diversity. Evaluated on synthetic opinion generation and real-world movie review data, the method reduces distribution alignment error by over 40% relative to baselines such as DPO and KTO and achieves significant improvements across multiple metrics. This work offers a new paradigm for fairness- and diversity-aware alignment of large language models.
📝 Abstract
Preferences within a group of people are not uniform but follow a distribution. While existing alignment methods like Direct Preference Optimization (DPO) attempt to steer models to reflect human preferences, they struggle to capture the distributional pluralistic preferences within a group. These methods often skew toward dominant preferences, overlooking the diversity of opinions, especially when conflicting preferences arise. To address this issue, we propose Group Distribution Preference Optimization (GDPO), a novel framework that aligns language models with the distribution of preferences within a group by incorporating the concept of beliefs that shape individual preferences. GDPO calibrates a language model using statistical estimation of the group's belief distribution and aligns the model with belief-conditioned preferences, offering a more inclusive alignment framework than traditional methods. In experiments using both synthetic controllable opinion generation and real-world movie review datasets, we show that DPO fails to align with the targeted belief distributions, while GDPO consistently reduces this alignment gap during training. Moreover, our evaluation metrics demonstrate that GDPO outperforms existing approaches in aligning with group distributional preferences, marking a significant advance in pluralistic alignment.
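The abstract describes GDPO only at a high level: calibrate the model's belief distribution to the group's, then align belief-conditioned preferences. As a hedged illustration only (not the paper's actual objective), one way to combine a distribution-calibration term with a DPO-style preference term is sketched below; the function names, the cross-entropy calibration choice, and the weighting `lam` are all assumptions for exposition:

```python
import math

def dpo_term(beta, logp_w, logp_l, ref_logp_w, ref_logp_l):
    # Standard DPO logistic loss on the implicit reward margin between
    # the chosen (w) and rejected (l) responses.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def belief_conditioned_loss(target_beliefs, model_beliefs, pref_pairs,
                            beta=0.1, lam=1.0):
    """Toy objective (assumed form, not from the paper):
    (a) calibrate the model's belief distribution to the group's target
        distribution via cross-entropy, and
    (b) average a DPO-style loss over belief-conditioned preference pairs.
    """
    calibration = sum(p * -math.log(q)
                      for p, q in zip(target_beliefs, model_beliefs) if p > 0)
    preference = sum(dpo_term(beta, *pair) for pair in pref_pairs) / len(pref_pairs)
    return calibration + lam * preference
```

Under this toy formulation, a model whose belief distribution drifts from the group's target incurs a larger calibration penalty even when its pairwise preference margins are unchanged, which is the qualitative behavior the abstract attributes to GDPO.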