๐ค AI Summary
Low sample efficiency in constructing surrogate models for high-dimensional deterministic simulations hinders practical deployment. Method: This paper proposes a reinforcement learningโguided hybrid active sampling framework that integrates stochastic exploration, expert trajectory replay, and maximum-entropy policy optimization to achieve comprehensive and efficient coverage of the state space; it further couples Kriging-based surrogate modeling with an active learning mechanism to dynamically select informative sampling points. Contribution/Results: Evaluated on multiple simulation benchmarks, the method significantly improves surrogate model accuracy and generalization capability. It achieves 30โ50% higher sample efficiency compared to conventional approaches such as Latin hypercube sampling. By enabling more effective data utilization in computationally expensive simulation environments, the proposed framework establishes a novel paradigm for surrogate-assisted reinforcement learning in real-world simulation-intensive applications.
๐ Abstract
Sample efficiency in the face of computationally expensive simulations is a common concern in surrogate modeling. Current strategies to minimize the number of samples needed are not as effective in simulated environments with wide state spaces. As a response to this challenge, we propose a novel method to efficiently sample simulated deterministic environments by using policies trained by Reinforcement Learning. We provide an extensive analysis of these surrogate-building strategies with respect to Latin-Hypercube sampling or Active Learning and Kriging, cross-validating performances with all sampled datasets. The analysis shows that a mixed dataset that includes samples acquired by random agents, expert agents, and agents trained to explore the regions of maximum entropy of the state transition distribution provides the best scores through all datasets, which is crucial for a meaningful state space representation. We conclude that the proposed method improves the state-of-the-art and clears the path to enable the application of surrogate-aided Reinforcement Learning policy optimization strategies on complex simulators.