🤖 AI Summary
This work addresses the limited robustness of reward modeling in reinforcement learning (RL) when preference feedback comes from multiple users. We propose a unified framework that integrates preference-based RL with unsupervised crowdsourcing modeling. To our knowledge, the approach is the first to bring crowdsourced label-aggregation techniques into preference-based RL, enabling automatic inference of latent variables that capture inter-user reliability differences and minority viewpoints, without requiring prior knowledge of user expertise or identity annotations. When user error rates are heterogeneous, the learned policy in most cases outperforms policies trained on any single user's feedback as well as majority-vote baselines, especially when the spread of user error rates within the crowd is large. The method also surfaces minority viewpoints in a fully unsupervised manner, improving the generalizability and fairness of population-aligned behavior. By jointly modeling user reliability and preference uncertainty, the framework yields a more robust reward model and, in turn, more reliable policy optimization when agents learn from crowds of users.
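The summary above does not spell out what the unsupervised aggregation step looks like. As a rough, illustrative sketch (not the authors' implementation), the snippet below runs a simple one-coin Dawid-Skene-style EM over binary pairwise preference labels: it alternates between estimating a consensus preference for each query pair and re-estimating each user's reliability from their agreement with that consensus. All function and variable names here are hypothetical.

```python
# Illustrative sketch only: infer per-user reliabilities and consensus
# preferences from a crowd's binary pairwise labels, without ground truth.
# The one-coin model and all names are assumptions made for this example;
# the paper's actual aggregation technique may differ.
import numpy as np

def aggregate_crowd_preferences(labels, n_iters=50, tol=1e-6):
    """labels: (n_pairs, n_users) array of 0/1 preferences
    (1 = user prefers segment A over B for that query pair).

    Returns (consensus, reliability):
      consensus   -- (n_pairs,) posterior P(A preferred) per pair
      reliability -- (n_users,) estimated accuracy of each user
    """
    n_pairs, n_users = labels.shape
    consensus = labels.mean(axis=1)          # initialize with majority vote
    reliability = np.full(n_users, 0.7)      # initialize: mildly reliable users
    prior = 0.5                              # prior P(A preferred)

    for _ in range(n_iters):
        # E-step: posterior over the latent "true" preference of each pair
        log_p1 = np.log(prior) + (labels * np.log(reliability)
                 + (1 - labels) * np.log(1 - reliability)).sum(axis=1)
        log_p0 = np.log(1 - prior) + ((1 - labels) * np.log(reliability)
                 + labels * np.log(1 - reliability)).sum(axis=1)
        new_consensus = 1.0 / (1.0 + np.exp(log_p0 - log_p1))

        # M-step: each user's accuracy = expected agreement with consensus
        agree = (labels * new_consensus[:, None]
                 + (1 - labels) * (1 - new_consensus)[:, None])
        reliability = np.clip(agree.mean(axis=0), 1e-3, 1 - 1e-3)
        prior = np.clip(new_consensus.mean(), 1e-3, 1 - 1e-3)

        if np.abs(new_consensus - consensus).max() < tol:
            consensus = new_consensus
            break
        consensus = new_consensus

    return consensus, reliability
```

Under this kind of model, users who systematically disagree with the consensus receive low reliability estimates, and a coherent dissenting subgroup appears as a cluster of users whose disagreement is correlated, which is loosely how minority viewpoints could be detected without any labels.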
📝 Abstract
Preference-based reinforcement learning (RL) provides a framework to train AI agents using human feedback through preferences over pairs of behaviors, enabling agents to learn desired behaviors when it is difficult to specify a numerical reward function. While this paradigm leverages human feedback, it typically treats the feedback as given by a single human user. However, different users may desire multiple AI behaviors and modes of interaction. Meanwhile, incorporating preference feedback from crowds (i.e., ensembles of users) in a robust manner remains a challenge, and the problem of training RL agents using feedback from multiple human users is still understudied. In this work, we introduce a conceptual framework, Crowd-PrefRL, that integrates preference-based RL approaches with techniques from unsupervised crowdsourcing to enable training of autonomous system behaviors from crowdsourced feedback. We show preliminary results suggesting that Crowd-PrefRL can learn reward functions and agent policies from preference feedback provided by crowds of unknown expertise and reliability. We also show that, in most cases, agents trained with Crowd-PrefRL outperform agents trained with majority-vote preferences or with preferences from any individual user, especially when the spread of user error rates among the crowd is large. Results further suggest that our method can identify the presence of minority viewpoints within the crowd in an unsupervised manner.
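For context on the preference-based RL side, reward learning from pairwise comparisons is commonly cast as fitting a Bradley-Terry model over segment returns. The sketch below shows that generic step, here consuming soft consensus labels (such as those produced by crowd aggregation) as targets. The network architecture, optimizer, hyperparameters, and the use of soft labels are illustrative assumptions, not details taken from the paper.

```python
# Generic sketch of preference-based reward learning with a Bradley-Terry
# loss over trajectory-segment pairs. Shapes, architecture, and the random
# data in the usage example are illustrative assumptions.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs):                       # obs: (batch, T, obs_dim)
        # Segment return = sum of per-step predicted rewards
        return self.net(obs).squeeze(-1).sum(dim=-1)   # (batch,)

def preference_loss(model, seg_a, seg_b, consensus):
    """seg_a, seg_b: (batch, T, obs_dim) segment pairs;
    consensus: (batch,) soft probability that A is preferred."""
    ret_a, ret_b = model(seg_a), model(seg_b)
    # Bradley-Terry: P(A preferred) = sigmoid(R(A) - R(B))
    logits = ret_a - ret_b
    return nn.functional.binary_cross_entropy_with_logits(logits, consensus)

# Hypothetical usage with random data, just to show the shapes involved.
if __name__ == "__main__":
    obs_dim, T, batch = 8, 50, 32
    model = RewardModel(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    seg_a = torch.randn(batch, T, obs_dim)
    seg_b = torch.randn(batch, T, obs_dim)
    consensus = torch.rand(batch)    # e.g. output of crowd aggregation
    loss = preference_loss(model, seg_a, seg_b, consensus)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Using a soft consensus probability as the target, rather than a hard majority vote, lets query pairs where the crowd is divided contribute weaker gradients, which is one plausible way reliability-aware aggregation could feed into reward learning.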