Off-Policy Evaluation and Learning for Survival Outcomes under Censoring

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of off-policy evaluation under right-censored survival outcomes, where existing methods are prone to systematic bias and yield inaccurate policy value estimates. To mitigate this issue, the study introduces inverse probability of censoring weighting (IPCW) into off-policy evaluation, proposing two novel estimators, IPCW-IPS and IPCW-DR, that are unbiased and explicitly correct for censoring-induced bias; IPCW-DR additionally achieves double robustness. The proposed framework also extends naturally to policy optimization under budget constraints. Experimental results on both synthetic and real-world datasets demonstrate that the method substantially improves the accuracy of policy evaluation and enhances learning performance in censored environments.
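The IPCW-IPS idea described above can be sketched numerically. The function below is a hypothetical illustration, not the paper's implementation: it assumes the per-sample target and behavior policy probabilities and an estimated censoring survival probability G_hat(t_i | x_i, a_i) (e.g. from a Kaplan-Meier fit on the censoring distribution) have already been computed. All names are illustrative.

```python
import numpy as np

def ipcw_ips_estimate(rewards, events, pi_target, pi_behavior, censor_survival):
    """IPCW-IPS sketch: the usual importance weight pi/pi_0 corrects for the
    policy shift, while events / G_hat(t) up-weights uncensored samples to
    correct the censoring bias (censored samples contribute zero)."""
    iw = pi_target / pi_behavior                         # policy importance weights
    cw = events / np.clip(censor_survival, 1e-6, None)   # IPCW weights
    return float(np.mean(iw * cw * rewards))

# Toy logged data: two samples, the second one censored (event = 0).
v_hat = ipcw_ips_estimate(
    rewards=np.array([2.0, 4.0]),          # observed (possibly censored) times
    events=np.array([1.0, 0.0]),           # 1 = event observed, 0 = censored
    pi_target=np.array([0.5, 0.5]),        # pi(a_i | x_i) under the target policy
    pi_behavior=np.array([0.5, 0.5]),      # pi_0(a_i | x_i) under the logging policy
    censor_survival=np.array([0.5, 0.8]),  # assumed G_hat(t_i | x_i, a_i) estimates
)
print(v_hat)  # 2.0: sample 1 contributes 2.0 / 0.5 = 4.0, sample 2 contributes 0
```

Note how a naive IPS estimator that ignores censoring would treat the censored time as the true outcome, which is the systematic underestimation the paper targets.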

📝 Abstract
Optimizing survival outcomes, such as patient survival or customer retention, is a critical objective in data-driven decision-making. Off-Policy Evaluation (OPE) provides a powerful framework for assessing such decision-making policies using logged data alone, without the need for costly or risky online experiments in high-stakes applications. However, typical estimators are not designed to handle right-censored survival outcomes, as they ignore unobserved survival times beyond the censoring time, leading to systematic underestimation of the true policy performance. To address this issue, we propose a novel framework for OPE and Off-Policy Learning (OPL) tailored for survival outcomes under censoring. Specifically, we introduce IPCW-IPS and IPCW-DR, which employ the Inverse Probability of Censoring Weighting technique to explicitly deal with censoring bias. We theoretically establish that our estimators are unbiased and that IPCW-DR achieves double robustness, ensuring consistency if either the propensity score or the outcome model is correct. Furthermore, we extend this framework to constrained OPL to optimize policy value under budget constraints. We demonstrate the effectiveness of our proposed methods through simulation studies and illustrate their practical impacts using public real-world data for both evaluation and learning tasks.
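The doubly robust variant described in the abstract can be sketched in the same style. This is a hypothetical illustration under assumed inputs: `q_taken` stands for an outcome model's prediction q_hat(x_i, a_i) for the logged action, and `q_target_mean` for its expectation over actions drawn from the target policy; neither name comes from the paper.

```python
import numpy as np

def ipcw_dr_estimate(rewards, events, pi_target, pi_behavior,
                     censor_survival, q_taken, q_target_mean):
    """IPCW-DR sketch: start from the outcome model's prediction of the target
    policy's value, then add an importance-weighted, censoring-corrected
    residual. Consistency holds if either the propensities or the outcome
    model q_hat is correct (double robustness)."""
    iw = pi_target / pi_behavior                         # policy importance weights
    cw = events / np.clip(censor_survival, 1e-6, None)   # IPCW weights
    return float(np.mean(q_target_mean + iw * cw * (rewards - q_taken)))

# Toy example: two samples, the second one censored.
v_hat = ipcw_dr_estimate(
    rewards=np.array([2.0, 4.0]),
    events=np.array([1.0, 0.0]),
    pi_target=np.array([0.6, 0.4]),
    pi_behavior=np.array([0.3, 0.8]),
    censor_survival=np.array([0.5, 0.5]),
    q_taken=np.array([1.0, 3.0]),          # q_hat(x_i, a_i), assumed fitted
    q_target_mean=np.array([1.5, 2.5]),    # E_{a ~ pi}[q_hat(x_i, a)], assumed
)
print(v_hat)  # 4.0: (1.5 + 2*2*1) and (2.5 + 0), averaged
```

When the outcome model is accurate, the residual term is small and the estimator's variance drops relative to IPCW-IPS; when it is misspecified, the correction term restores consistency, which is the usual doubly robust trade-off.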
Problem

Research questions and friction points this paper is trying to address.

Off-Policy Evaluation
Survival Outcomes
Censoring
Right-Censoring
Policy Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Off-Policy Evaluation
Survival Analysis
Censoring
Inverse Probability of Censoring Weighting
Double Robustness
Kohsuke Kubota
NTT DOCOMO, INC.
Mitsuhiro Takahashi
NTT DOCOMO, INC.
Yuta Saito
Unknown affiliation
machine learning, causal inference, recommender systems, information retrieval