🤖 AI Summary
This paper investigates the complexity of decision making in settings where the reward is non-stationary but the environment dynamics are fixed (i.e., the hybrid regime). To address this, we propose an extension of the Decision-Estimation Coefficient (DEC) framework of Foster et al. (2021) that gives a unified and more precise quantitative characterization of learning complexity for both model-based and model-free methods in such settings. We further design a tunable model aggregation mechanism based on subsets of the hypothesis set, enabling a theoretically controlled trade-off between estimation accuracy and decision-making overhead. Extending the DEC theory to adversarial reward settings, we derive upper bounds on complexity that recover classical results under stochastic rewards and suggest a paradigm for designing general-purpose reinforcement learning algorithms. The framework bridges theoretical analysis and practical algorithm design by unifying complexity analysis across learning paradigms and reward models, while explicitly quantifying the interplay among estimation error, model capacity, and decision cost.
📝 Abstract
Recent work by Foster et al. (2021, 2022, 2023) and Xu and Zeevi (2023) developed the Decision-Estimation Coefficient (DEC) framework, which characterizes the complexity of general online decision-making problems and provides a general algorithm design principle. These works, however, focus either on the purely stochastic regime, where the world remains fixed over time, or on the purely adversarial regime, where the world changes arbitrarily over time. For the hybrid regime, where the dynamics of the world are fixed while the rewards change arbitrarily, they give only pessimistic bounds on the decision complexity. In this work, we propose a general extension of the DEC that characterizes this case more precisely. Beyond applications to special cases, our framework leads to a flexible algorithm design in which the learner learns over subsets of the hypothesis set, trading off estimation complexity against decision complexity, which may be of independent interest. Our work covers model-based and model-free learning in the hybrid regime, with a newly proposed extension of the bilinear classes (Du et al., 2021) to the adversarial-reward case. We also recover some existing model-free learning results in the purely stochastic regime.
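For readers unfamiliar with the object being extended: the abstract builds on the model-based DEC of Foster et al. (2021). As a hedged sketch (notation is our rendering of the standard definition, not taken from this paper), the DEC of a model class $\mathcal{M}$ at a reference model $\widehat{M}$ with scale parameter $\gamma > 0$ is a min-max value over decision distributions:

```latex
\mathsf{dec}_{\gamma}\big(\mathcal{M}, \widehat{M}\big)
  \;=\; \inf_{p \,\in\, \Delta(\Pi)} \; \sup_{M \,\in\, \mathcal{M}} \;
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_{M}) - f^{M}(\pi)
      \;-\; \gamma \, D_{\mathrm{H}}^{2}\!\big(M(\pi), \widehat{M}(\pi)\big)
    \right]
```

Here $\Pi$ is the decision space, $f^{M}(\pi)$ is the value of decision $\pi$ under model $M$, $\pi_{M}$ is the optimal decision for $M$, and $D_{\mathrm{H}}^{2}$ is the squared Hellinger distance between observation distributions. Informally, the DEC balances the regret a learner must incur against the information (estimation progress) it gains; the paper's contribution is a variant of this quantity tailored to fixed dynamics with adversarially changing rewards.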