🤖 AI Summary
This study addresses the limitations of existing information-theoretic metrics—such as Policy Information Capacity (PIC) and Policy-Optimal Information Capacity (POIC)—which rely on random policies and often yield counterintuitive assessments of task complexity in non-tabular reinforcement learning settings. For instance, these metrics may erroneously suggest that a two-link robotic manipulation task is simpler than a single-link one, or that sparse-reward tasks are easier than dense-reward counterparts. To rigorously evaluate such measures, the authors construct a suite of robotic manipulation tasks with incrementally increasing difficulty and systematically assess the effectiveness of approaches such as Random Weight Guessing (RWG) under both dense and sparse reward conditions. The empirical results expose fundamental flaws in PIC and POIC, highlighting the need for more reliable frameworks to quantify task complexity in reinforcement learning.
📝 Abstract
Reinforcement learning (RL) has enabled major advances in fields such as robotics and natural language processing. A key challenge in RL is measuring task complexity, which is essential for creating meaningful benchmarks and designing effective curricula. While there are numerous well-established metrics for assessing task complexity in tabular settings, relatively few exist for non-tabular domains. These include (i) statistical analysis of the performance of random policies via Random Weight Guessing (RWG), and (ii) the information-theoretic metrics Policy Information Capacity (PIC) and Policy-Optimal Information Capacity (POIC), both of which rely on RWG. In this paper, we evaluate these methods on robotic manipulation setups of progressively increasing, known relative complexity, under both dense and sparse reward formulations. Our empirical results reveal that measuring task complexity remains far from straightforward. Specifically, under the same reward formulation, PIC suggests that a two-link robotic arm setup is easier than a single-link setup, contradicting both robotic-control intuition and empirical RL results, which hold that the two-link setup is inherently more complex. Likewise, for the same setup, POIC estimates that tasks with sparse rewards are easier than those with dense rewards. Thus, we show that both PIC and POIC contradict typical understanding and empirical results from RL. These findings highlight the need to move beyond RWG-based metrics towards metrics that more reliably capture task complexity in non-tabular RL, with our task framework as a starting point.
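To make the RWG-based metrics concrete, the sketch below estimates PIC and POIC from a table of episodic returns collected by evaluating many randomly initialized policies. It assumes the standard formulations, PIC as the mutual information I(θ; R) between policy parameters and returns, and POIC as I(θ; O) for a binary optimality indicator O = 1[R ≥ threshold], and uses a simple plug-in histogram estimator. The binning scheme and threshold choice here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def _entropy(p):
    """Shannon entropy (nats) of a probability vector; ignores zero bins."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def pic_poic(returns, n_bins=10, opt_threshold=None):
    """Plug-in estimates of PIC and POIC from RWG evaluation data.

    returns: array of shape (n_policies, n_episodes), where row i holds
    episodic returns of the i-th randomly sampled policy.
    PIC  ~ I(theta; R) = H(R) - E_theta[H(R | theta)]   (discretized returns)
    POIC ~ I(theta; O) with O = 1[R >= opt_threshold]   (binary optimality)
    """
    # Shared bin edges so marginal and conditional histograms are comparable.
    edges = np.histogram_bin_edges(returns.ravel(), bins=n_bins)

    # Marginal entropy H(R) over all (policy, episode) returns.
    counts, _ = np.histogram(returns.ravel(), bins=edges)
    h_r = _entropy(counts / counts.sum())

    # Conditional entropy H(R | theta), averaged over sampled policies.
    h_r_given_theta = np.mean([
        _entropy(np.histogram(row, bins=edges)[0] / row.size)
        for row in returns
    ])
    pic = h_r - h_r_given_theta

    # Binary optimality indicator; mean return is an assumed default threshold.
    if opt_threshold is None:
        opt_threshold = returns.mean()
    o = (returns >= opt_threshold).astype(float)
    p_o = o.mean()
    h_o = _entropy(np.array([p_o, 1.0 - p_o]))
    h_o_given_theta = np.mean([
        _entropy(np.array([row.mean(), 1.0 - row.mean()]))
        for row in o
    ])
    poic = h_o - h_o_given_theta
    return pic, poic
```

A high PIC means individual random policies produce returns that are consistent for each policy yet spread out across policies, i.e. random search can discriminate between them; the paper's point is that ranking tasks by these quantities can invert the intuitive difficulty ordering.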