🤖 AI Summary
This work establishes a theoretical foundation for success-conditioned policy improvement by showing, for the first time, that success conditioning is equivalent to a trust-region optimization problem constrained by a χ²-divergence whose radius is determined adaptively by the data, enabling conservative yet effective policy updates. The study derives an exact analytical identity among the magnitude of policy improvement, the extent of policy change, and action-influence, revealing that the method inherently avoids performance degradation and hazardous distributional shift: it fails gracefully, leaving the policy nearly unchanged when improvement is unattainable. Furthermore, it shows that conventional return-thresholding approaches, while amplifying the improvement signal, may deviate from the true optimization objective.
📝 Abstract
A widely used technique for improving policies is success conditioning, in which one collects trajectories, identifies those that achieve a desired outcome, and updates the policy to imitate the actions taken along successful trajectories. This principle appears under many names -- rejection sampling with SFT, goal-conditioned RL, Decision Transformers -- yet what optimization problem it solves, if any, has remained unclear. We prove that success conditioning exactly solves a trust-region optimization problem, maximizing policy improvement subject to a $\chi^2$ divergence constraint whose radius is determined automatically by the data. This yields an identity: relative policy improvement, the magnitude of policy change, and a quantity we call action-influence -- measuring how random variation in action choices affects success rates -- are exactly equal at every state. Success conditioning thus emerges as a conservative improvement operator. Exact success conditioning cannot degrade performance or induce dangerous distribution shift, but when it fails, it does so observably, by hardly changing the policy at all. We apply our theory to the common practice of return thresholding, showing this can amplify improvement, but at the cost of potential misalignment with the true objective.
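The identity claimed in the abstract can be checked numerically in the simplest setting. The sketch below is a hypothetical one-step example (not the paper's code): a policy over three actions at a single state, with illustrative per-action success probabilities. Success conditioning is applied via Bayes' rule, and the three quantities — relative improvement, the χ² divergence between new and old policy, and action-influence (the variance of success probability under the policy, normalized by the squared success rate) — come out exactly equal.

```python
import numpy as np

# Hypothetical one-step example: a policy over 3 actions at a single state,
# with assumed per-action success probabilities (illustrative numbers only).
pi = np.array([0.5, 0.3, 0.2])      # base policy pi(a)
p_succ = np.array([0.9, 0.4, 0.1])  # P(success | a)

p_bar = pi @ p_succ  # success rate of the base policy

# Success conditioning = Bayes' rule:
#   pi'(a) = pi(a) * P(success | a) / P(success)
pi_cond = pi * p_succ / p_bar

p_new = pi_cond @ p_succ  # success rate of the conditioned policy

# The three quantities the abstract says coincide at every state:
rel_improvement = (p_new - p_bar) / p_bar                    # relative policy improvement
chi2_div = np.sum(pi * (pi_cond / pi - 1.0) ** 2)            # chi^2 divergence D(pi' || pi)
action_influence = np.sum(pi * (p_succ - p_bar) ** 2) / p_bar**2  # Var_pi[p] / p_bar^2

print(rel_improvement, chi2_div, action_influence)  # all three are equal
```

Note how the conservatism claim shows up here: if every action had the same success probability, `pi_cond` would equal `pi` and all three quantities would be zero, i.e. the update leaves the policy unchanged when no improvement is available.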