🤖 AI Summary
This paper investigates the distinct theoretical roles of KL regularization in contextual bandits and online Reinforcement Learning from Human Feedback (RLHF), and its interplay with data coverage. Method: We establish the first sharp theoretical analysis of KL regularization in RLHF and introduce a two-stage mixed sampling strategy that explicitly leverages the coverage of the reference policy. Contribution/Results: Our analysis demonstrates that KL regularization reduces the sample complexity from the standard $\mathcal{O}(1/\varepsilon^2)$ to $\mathcal{O}(1/\varepsilon)$, without requiring explicit exploration policies or strong structural assumptions on the reward function class. Crucially, we identify that the data coverage induced by the reference policy directly governs the efficiency of online RLHF: our mixed sampling strategy achieves a sample complexity that depends only additively on the coverage coefficient. The work unifies KL regularization and data coverage into a coherent theoretical framework, yielding a novel paradigm for efficient and robust policy optimization from human feedback.
📄 Abstract
Reverse Kullback-Leibler (KL) regularization has emerged as a predominant technique for enhancing policy optimization in reinforcement learning (RL) and reinforcement learning from human feedback (RLHF), forcing the learned policy to stay close to a reference policy. While the effectiveness and necessity of KL regularization have been empirically demonstrated in various practical scenarios, current theoretical analyses of KL-regularized RLHF still obtain the same $\mathcal{O}(1/\epsilon^2)$ sample complexity as problems without KL regularization. To understand the fundamental distinction between policy learning objectives with and without KL regularization, we are the first to theoretically demonstrate the power of KL regularization by providing a sharp analysis of KL-regularized contextual bandits and RLHF, revealing an $\mathcal{O}(1/\epsilon)$ sample complexity when $\epsilon$ is sufficiently small. We further explore the role of data coverage in contextual bandits and RLHF. While the coverage assumption is commonly employed in offline RLHF to link samples from the reference policy to the optimal policy, often at the cost of a multiplicative dependence on the coverage coefficient, its impact on the sample complexity of online RLHF remains unclear. Previous theoretical analyses of online RLHF typically require explicit exploration and additional structural assumptions on the reward function class. In contrast, we show that with sufficient coverage from the reference policy, a simple two-stage mixed sampling strategy can achieve a sample complexity with only an additive dependence on the coverage coefficient. Our results provide a comprehensive understanding of the roles of KL regularization and data coverage in RLHF, shedding light on the design of more efficient RLHF algorithms.
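For concreteness, the reverse-KL-regularized objective discussed above takes the standard form used throughout the RLHF literature (the notation here, with regularization strength $\eta$, reward $r$, context distribution $d_0$, and reference policy $\pi_{\mathrm{ref}}$, is the conventional one and is not taken verbatim from the paper's body):

```latex
\max_{\pi}\;
\mathbb{E}_{x\sim d_0,\,a\sim\pi(\cdot\mid x)}\bigl[r(x,a)\bigr]
\;-\;
\eta\,\mathbb{E}_{x\sim d_0}\Bigl[\mathrm{KL}\bigl(\pi(\cdot\mid x)\,\big\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\bigr)\Bigr]
```

Its maximizer is the Gibbs policy $\pi^*(a\mid x)\propto \pi_{\mathrm{ref}}(a\mid x)\exp\bigl(r(x,a)/\eta\bigr)$, which makes explicit why the reference policy's coverage of the optimal policy governs the attainable sample complexity.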