On the Partial Identifiability in Reward Learning: Choosing the Best Reward

📅 2025-01-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of selecting the best reward in reward learning when the target reward function is only partially identifiable. Conventional methods restrict reward selection to the feasible set of rewards compatible with the feedback, which can hinder downstream policy generalization. To overcome this, the paper proposes an "out-of-feasible-set reward selection" paradigm. It first establishes a quantitative identifiability framework grounded in convex analysis and optimization theory, and proves that there exist rewards outside the feasible set that strictly dominate all feasible ones. Building on this insight, it devises three provably efficient algorithms tailored to reward transfer, each with theoretical guarantees on convergence and sample efficiency. Experiments on multi-task reward-transfer benchmarks show that the approach significantly improves policy-generalization performance over state-of-the-art baselines.

📝 Abstract
In Reward Learning (ReL), we are given feedback on an unknown *target reward*, and the goal is to use this information to find it. When the feedback is not informative enough, the target reward is only *partially identifiable*, i.e., there exists a set of rewards (the feasible set) that are equally compatible with the feedback. In this paper, we show that there exists a choice of reward, not necessarily contained in the feasible set, that, *depending on the ReL application*, improves performance relative to selecting a reward arbitrarily among the feasible ones. To this end, we introduce a new *quantitative framework* to analyze ReL problems in a simple yet expressive way. We exemplify the framework in a *reward transfer* use case, for which we devise three provably-efficient ReL algorithms.
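The core idea of the abstract can be illustrated with a minimal sketch (this is an illustrative toy example, not the paper's actual construction): when feedback cannot distinguish between several feasible rewards, committing to any one of them risks a large error if the true target is another, and a reward *outside* the feasible set can achieve a smaller worst-case error. The two-dimensional reward vectors and the L2 error criterion below are assumptions made for illustration.

```python
import numpy as np

# Hypothetical feasible set: two reward vectors that the feedback
# cannot distinguish (both are "equally compatible" with it).
feasible = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def worst_case_error(candidate, feasible_set):
    """Worst-case L2 distance to the unknown target reward,
    assuming the target lies somewhere in the feasible set."""
    return max(np.linalg.norm(candidate - r) for r in feasible_set)

# Candidates: the feasible rewards themselves, plus their midpoint,
# which is NOT in the feasible set.
candidates = feasible + [np.array([0.5, 0.5])]
best = min(candidates, key=lambda c: worst_case_error(c, feasible))
print(best)  # the out-of-set midpoint [0.5 0.5] attains the smallest worst case
```

Each feasible point sits at distance √2 ≈ 1.41 from the other, while the midpoint is only √0.5 ≈ 0.71 from both, so the minimax choice lies outside the feasible set, mirroring the paper's claim at a toy scale.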
Problem

Research questions and friction points this paper is trying to address.

Reward Learning
Partial Identifiability
Optimal Reward Selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Partially Identifiable Reward Learning
Decision-making under Uncertainty
Effective ReL Algorithms
🔎 Similar Papers
2024-04-12 · 2024 IEEE Intelligent Vehicles Symposium (IV) · Citations: 8