🤖 AI Summary
This work addresses the lack of Bayesian uncertainty quantification and the poor calibration of deep survival analysis in small-sample regimes. We introduce rigorous Bayesian inference into continuous-time deep survival modeling for the first time. Our method combines a two-stage nonparametric data-augmentation scheme, which flexibly models time-varying covariate–hazard relationships and comes with theoretical guarantees, with a mean-field variational inference algorithm whose coordinate-ascent updates are all available in closed form: locally linearizing the Bayesian neural network yields full conjugacy. Experiments demonstrate that our model achieves significantly better calibration than existing deep survival methods, while matching or exceeding their discriminative performance on both synthetic and real-world datasets. Moreover, it delivers robust, interpretable uncertainty estimates for survival functions.
📝 Abstract
We introduce NeuralSurv, the first deep survival model to incorporate Bayesian uncertainty quantification. Our non-parametric, architecture-agnostic framework flexibly captures time-varying covariate-risk relationships in continuous time via a novel two-stage data-augmentation scheme, for which we establish theoretical guarantees. For efficient posterior inference, we introduce a mean-field variational algorithm with coordinate-ascent updates that scale linearly in model size. By locally linearizing the Bayesian neural network, we obtain full conjugacy and derive all coordinate updates in closed form. In experiments, NeuralSurv delivers superior calibration compared to state-of-the-art deep survival models, while matching or exceeding their discriminative performance across both synthetic benchmarks and real-world datasets. Our results demonstrate the value of Bayesian principles in data-scarce regimes by enhancing model calibration and providing robust, well-calibrated uncertainty estimates for the survival function.
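The key mechanism behind the closed-form updates, locally linearizing the Bayesian neural network so that inference becomes conjugate, can be illustrated in isolation. The sketch below is a minimal, hypothetical example (a toy two-weight "network" with a Gaussian prior and Gaussian likelihood), not NeuralSurv's actual survival likelihood or architecture: replacing `f(w, x)` by its first-order Taylor expansion around the current weights makes the model linear-Gaussian, so the weight posterior is Gaussian in closed form.

```python
import numpy as np

def f(w, x):
    """Toy nonlinear 'network': scalar output per input, weights w of shape (2,)."""
    return np.tanh(w[0] * x) * w[1]

def jacobian(w, x, eps=1e-6):
    """Finite-difference Jacobian of f with respect to w, evaluated at inputs x."""
    J = np.zeros((x.size, w.size))
    for j in range(w.size):
        dw = np.zeros_like(w)
        dw[j] = eps
        J[:, j] = (f(w + dw, x) - f(w - dw, x)) / (2 * eps)
    return J

def linearized_gaussian_posterior(w0, x, y, prior_var=1.0, noise_var=0.1):
    """Linearize f around w0: f(w, x) ~= f(w0, x) + J (w - w0).
    Under a Gaussian prior N(w0, prior_var * I) and Gaussian observation
    noise, the linearized model is conjugate, so the posterior over w
    is Gaussian with mean and covariance available in closed form."""
    J = jacobian(w0, x)
    resid = y - f(w0, x)                                 # residual at the linearization point
    prec = np.eye(w0.size) / prior_var + J.T @ J / noise_var
    cov = np.linalg.inv(prec)
    mean = w0 + cov @ (J.T @ resid) / noise_var
    return mean, cov

rng = np.random.default_rng(0)
w_true = np.array([1.5, 0.8])
x = rng.uniform(-2.0, 2.0, 50)
y = f(w_true, x) + 0.05 * rng.normal(size=50)
mean, cov = linearized_gaussian_posterior(np.array([1.0, 1.0]), x, y)
```

In a coordinate-ascent scheme, an update of this form would be one step: each factor of the mean-field posterior is refreshed in closed form given the others, which is what the linearization makes possible.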