AI Summary
This work addresses the reward hacking problem in adaptive tutoring systems that arises from optimizing short-term engagement signals, proposing a reinforcement learning approach that optimizes long-term learning outcomes subject to instructional safety constraints. By modeling teaching safety as a mastery-state-dependent set of admissible actions, the method introduces mastery-conditioned feasibility into constrained Markov decision processes for the first time. A two-timescale primal-dual algorithm is developed that integrates structured action masking with constrained policy optimization, dynamically restricting policy outputs according to the learner's knowledge state. Theoretically, the approach guarantees feasibility preservation and convergence to stationary feasible points, and a safety gap result shows that optimizing within the mastery-conditioned feasible set can strictly dominate post-hoc filtering under identical safety budgets. Experiments demonstrate that the method satisfies safety constraints within tolerance, reduces discounted safety cost, and substantially lowers the Reward Hacking Severity Index, outperforming unconstrained and reward-shaping baselines.
Abstract
Engagement-optimized adaptive tutoring systems may prioritize short-term behavioral signals over sustained learning outcomes, creating structural incentives for reward hacking in reinforcement learning policies. We formalize this challenge as a constrained Markov decision process (CMDP) with mastery-conditioned feasibility, in which pedagogical safety constraints dynamically restrict admissible actions according to learner mastery and prerequisite structure.
We introduce Mastery-Conditioned Constrained Policy Optimization (MC-CPO), a two-timescale primal-dual algorithm that integrates structural action masking with constrained policy optimization. In the tabular regime, we establish feasibility preservation and convergence to stationary feasible points under standard stochastic approximation conditions and derive a safety gap result showing that optimization within the mastery-conditioned feasible set can strictly dominate post-hoc filtering under identical safety budgets.
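To make the mechanism concrete, the following is a minimal tabular sketch of the two ideas the abstract names: a mastery-conditioned feasible action set derived from prerequisite structure, and a two-timescale primal-dual update that performs fast policy ascent on the Lagrangian and slow dual ascent on constraint violation. The skill graph, mastery threshold, learning rates, and update form are illustrative assumptions for exposition, not the paper's exact MC-CPO formulation.

```python
import numpy as np

# Illustrative assumptions (not the paper's exact setup):
# action a means "teach skill a"; a skill is teachable only when
# all its prerequisites are mastered above a threshold.
N_ACTIONS = 4
PREREQ = {1: [0], 2: [1], 3: [1]}   # hypothetical skill -> prerequisites
MASTERY_THRESHOLD = 0.6

def feasible_actions(mastery):
    """Mastery-conditioned feasible set: mask actions whose
    prerequisites are not yet mastered."""
    mask = np.zeros(N_ACTIONS, dtype=bool)
    for a in range(N_ACTIONS):
        mask[a] = all(mastery[p] >= MASTERY_THRESHOLD
                      for p in PREREQ.get(a, []))
    return mask

def masked_policy(logits, mask):
    """Structural action masking: softmax restricted to the
    feasible set, so infeasible actions get exactly zero mass."""
    z = np.where(mask, logits, -np.inf)
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def primal_dual_step(logits, lam, mastery, reward, cost, budget,
                     lr_fast=0.1, lr_slow=0.01):
    """One two-timescale step: fast policy-gradient ascent on the
    Lagrangian reward - lam * cost, slow dual ascent on the
    expected-cost constraint violation."""
    mask = feasible_actions(mastery)
    pi = masked_policy(logits, mask)
    a = np.random.choice(N_ACTIONS, p=pi)
    # Fast timescale: score-function update on the sampled action.
    adv = reward[a] - lam * cost[a]
    grad = -pi
    grad[a] += 1.0
    logits = logits + lr_fast * adv * grad
    # Slow timescale: dual variable tracks constraint violation,
    # projected back onto lam >= 0.
    lam = max(0.0, lam + lr_slow * (pi @ cost - budget))
    return logits, lam
```

Because infeasible actions receive zero probability by construction, every policy encountered during training stays inside the safety set, which is the sense in which constrained optimization here differs from filtering an unconstrained policy after the fact.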
Empirical validation is conducted in minimal and extended tabular environments and in a neural tutoring setting. Across 10 random seeds and one million training steps in the neural regime, MC-CPO satisfies constraint budgets within tolerance, reduces discounted safety costs relative to unconstrained and reward-shaped baselines, and substantially lowers the Reward Hacking Severity Index (RHSI).
These results indicate that embedding pedagogical structure directly into the feasible action space provides a principled foundation for mitigating reward hacking in instructional reinforcement learning systems.