Learning Utilities from Demonstrations in Markov Decision Processes

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses a long-standing limitation of inverse reinforcement learning (IRL): by assuming risk-neutral agents, IRL cannot infer true risk preferences from behavioral demonstrations. We propose utility learning (UL) as a new paradigm that explicitly models and infers risk-sensitive utility functions within the Markov decision process (MDP) framework, directly from expert demonstrations. Theoretically, we establish a partial identifiability theory for utility functions; methodologically, we design two efficient UL algorithms and prove finite-sample convergence with sample complexity guarantees. Empirical results show that the approach recovers the structural form of the underlying utility function and characterizes human subjects' risk attitudes, distinguishing risk aversion from risk seeking, with significant improvements over conventional risk-neutral IRL models.

📝 Abstract
Our goal is to extract useful knowledge from demonstrations of behavior in sequential decision-making problems. Although it is well-known that humans commonly engage in risk-sensitive behaviors in the presence of stochasticity, most Inverse Reinforcement Learning (IRL) models assume a risk-neutral agent. Beyond introducing model misspecification, these models do not directly capture the risk attitude of the observed agent, which can be crucial in many applications. In this paper, we propose a novel model of behavior in Markov Decision Processes (MDPs) that explicitly represents the agent's risk attitude through a utility function. We then define the Utility Learning (UL) problem as the task of inferring the observed agent's risk attitude, encoded via a utility function, from demonstrations in MDPs, and we analyze the partial identifiability of the agent's utility. Furthermore, we devise two provably efficient algorithms for UL in a finite-data regime, and we analyze their sample complexity. We conclude with proof-of-concept experiments that empirically validate both our model and our algorithms.
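The abstract's core idea is that an agent's risk attitude can be encoded by a utility function applied to returns: a concave utility implies risk aversion, a convex one risk seeking. The following toy sketch (not the paper's UL algorithm, just an illustration under assumed utility forms) shows how different utilities rank a safe policy against a mean-equivalent gamble:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two return distributions with equal expected return (10) but different risk:
safe_returns = np.full(10_000, 10.0)                   # deterministic return
risky_returns = rng.choice([0.0, 20.0], size=10_000)   # 50/50 gamble, mean 10

def expected_utility(returns, utility):
    """Mean utility of sampled returns under a given utility function."""
    return utility(returns).mean()

# Illustrative (hypothetical) utility functions encoding risk attitudes
risk_averse = lambda g: np.sqrt(g)   # concave  -> prefers the safe policy
risk_seeking = lambda g: g ** 2      # convex   -> prefers the gamble

# A risk-averse agent ranks the safe policy higher...
assert expected_utility(safe_returns, risk_averse) > \
       expected_utility(risky_returns, risk_averse)
# ...while a risk-seeking agent ranks the gamble higher.
assert expected_utility(risky_returns, risk_seeking) > \
       expected_utility(safe_returns, risk_seeking)
```

UL inverts this picture: given demonstrations of which policies the agent actually chose, it infers the utility function (here assumed known) that rationalizes those choices.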
Problem

Research questions and friction points this paper is trying to address.

Extracting risk attitudes from agent demonstrations in MDPs
Learning utility functions to model risk-sensitive behaviors
Developing algorithms for efficient utility inference from data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models agent's risk attitude via utility function
Introduces Utility Learning for risk inference
Develops provably efficient algorithms with sample complexity analysis