🤖 AI Summary
Generative flow networks rely on approximate reward functions to construct high-reward objects, but noisy data induce epistemic uncertainty in the estimated reward, compromising the reliability of the learnt policy. To address this, we propose a surrogate-model-based method for quantifying policy uncertainty: a Polynomial Chaos Expansion (PCE) establishes an analytical mapping between low-dimensional reward parameters and the action distribution, and is combined with lightweight model ensembling and Monte Carlo sampling to efficiently characterize the policy's sensitivity to reward uncertainty. Evaluated on discrete and continuous grid worlds, symbolic regression, and Bayesian structure learning tasks, our approach achieves accurate, interpretable policy-uncertainty estimates while substantially reducing computational cost, offering a practical route to trustworthy generative flow modeling.
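To make the pipeline concrete, here is a minimal sketch using the general-purpose `chaospy` PCE library (a choice made here for illustration; the paper does not name its tooling). The trained flow-network policy is stubbed out by a hypothetical `ensemble_action_probs` function, and the two-dimensional Gaussian reward parametrisation is an assumption for the example, not the paper's setup:

```python
import numpy as np
import chaospy  # general-purpose polynomial-chaos library, used here for illustration

def ensemble_action_probs(theta, state=0.5):
    """Hypothetical stand-in for the policy head of a flow network
    trained under reward parameters theta (one ensemble member each)."""
    logits = np.array([theta[0] * state, theta[1] * (1 - state), theta[0] - theta[1]])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Epistemic uncertainty over a low-dimensional reward parametrisation (assumed 2-D Gaussian).
joint = chaospy.J(chaospy.Normal(1.0, 0.2), chaospy.Normal(0.0, 0.2))

# Lightweight ensemble: evaluate the policy at a handful of sample points.
nodes = joint.sample(16, rule="sobol")                     # shape (2, 16)
evals = np.array([ensemble_action_probs(t) for t in nodes.T])

# Fit a degree-3 PCE mapping reward parameters -> action distribution.
expansion = chaospy.generate_expansion(3, joint)
surrogate = chaospy.fit_regression(expansion, nodes, evals)

# Cheap Monte Carlo through the surrogate instead of retraining networks.
thetas = joint.sample(10_000)
probs = np.array([surrogate(*t) for t in thetas.T])
print("mean action probs:", probs.mean(axis=0))
print("epistemic std:    ", probs.std(axis=0))
```

Note that the polynomial surrogate's outputs are not constrained to the probability simplex; a real implementation might fit logits instead of probabilities and normalise after evaluation.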
📝 Abstract
Generative flow networks are able to sample, via sequential construction, complex high-reward objects according to a reward function. However, such reward functions are often estimated approximately from noisy data, leading to epistemic uncertainty in the learnt policy. We present an approach to quantify this uncertainty by constructing a surrogate model, a polynomial chaos expansion fitted on a small ensemble of trained flow networks. This model learns the relationship between reward functions, parametrised in a low-dimensional space, and the probability distributions over actions at each step along a trajectory of the flow network. The surrogate can then be used for inexpensive Monte Carlo sampling to estimate the uncertainty in the policy induced by the uncertain rewards. We illustrate the performance of our approach on discrete and continuous grid worlds, symbolic regression, and a Bayesian structure learning task.
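Schematically (our notation, not taken from the paper): assuming the reward is parametrised by a low-dimensional random vector $\theta$ with known distribution, the surrogate takes the standard PCE form, and the policy's epistemic uncertainty follows by Monte Carlo through it:

$$
\pi(a \mid s, \theta) \;\approx\; \sum_{k=0}^{K} c_k(s, a)\,\Phi_k(\theta),
\qquad
\mathrm{Var}_{\theta}\!\left[\pi(a \mid s, \theta)\right] \;\approx\; \frac{1}{N}\sum_{i=1}^{N}\Big(\pi\big(a \mid s, \theta^{(i)}\big) - \bar{\pi}(a \mid s)\Big)^{2},
$$

where the $\Phi_k$ are polynomials orthogonal with respect to the distribution of $\theta$, the coefficients $c_k(s, a)$ are fitted by regression on the small ensemble of trained networks, and the draws $\theta^{(i)}$ are evaluated through the cheap surrogate rather than through retrained networks.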