🤖 AI Summary
In sparse-reward environments, reinforcement learning agents suffer from inefficient exploration and poor adaptability. This paper systematically evaluates four diversity-oriented intrinsic reward mechanisms—State Counting, Intrinsic Curiosity Module (ICM), Maximum Entropy, and Diversity Is All You Need (DIAYN)—within the MiniGrid benchmark. It is the first empirical study to reveal a non-monotonic relationship between the *diversity level* targeted (state-, policy-, or skill-level) and exploration performance: State Counting excels under low-dimensional observations but degrades severely with RGB inputs; Maximum Entropy demonstrates superior robustness across modalities; DIAYN fails to improve practical exploration efficiency due to difficulties in skill acquisition and its bias toward behavioral discrimination over state coverage. Evaluation employs multi-faceted metrics—including observation coverage, positional coverage, policy entropy, and time-to-sparse-reward—yielding novel insights into *hierarchical compatibility* for intrinsic reward design.
📝 Abstract
One of the open challenges in Reinforcement Learning is the hard exploration problem in sparse reward environments. Various types of intrinsic rewards have been proposed to address this challenge by pushing towards diversity. This diversity might be imposed at different levels, favouring the agent to explore different states, policies or behaviours (State, Policy and Skill level diversity, respectively). However, the impact of diversity on the agent's behaviour remains unclear. In this work, we aim to fill this gap by studying the effect of different levels of diversity imposed by intrinsic rewards on the exploration patterns of RL agents. We select four intrinsic rewards (State Count, Intrinsic Curiosity Module (ICM), Maximum Entropy, and Diversity is all you need (DIAYN)), each pushing for a different diversity level. We conduct an empirical study on MiniGrid environment to compare their impact on exploration considering various metrics related to the agent's exploration, namely: episodic return, observation coverage, agent's position coverage, policy entropy, and timeframes to reach the sparse reward. The main outcome of the study is that State Count leads to the best exploration performance in the case of low-dimensional observations. However, in the case of RGB observations, the performance of State Count is highly degraded mostly due to representation learning challenges. Conversely, Maximum Entropy is less impacted, resulting in a more robust exploration, despite being not always optimal. Lastly, our empirical study revealed that learning diverse skills with DIAYN, often linked to improved robustness and generalisation, does not promote exploration in MiniGrid environments. This is because: i) learning the skill space itself can be challenging, and ii) exploration within the skill space prioritises differentiating between behaviours rather than achieving uniform state visitation.